UIGraphicsGetCurrentContext() short lifetime - iOS

I have a view which implements freehand drawing, but I have a small problem. I noticed on the iPad 3 that everything went to hell, so I tried to update my drawing code (probably as I should have done in the first place) to only update the portion that was stroked. However, the first stroke after opening, and the first stroke after about 10 seconds of idle, are extremely slow. After everything is "warmed up" it is smooth as butter and only takes about 0.15 ms per drawRect. I don't know why, but the whole view rectangle is getting marked as dirty for the first drawRect and for the first drawRect after idle (it then takes about 150 ms to update). The stack trace shows that my rectangle is being overridden by CABackingStoreUpdate_.
I tried not drawing the layer if the rectangle was huge, but then my entire context goes blank (it reappears as I draw over the old areas, like a lotto ticket). Does anyone have any idea what goes on with UIGraphicsGetCurrentContext()? That's the only place I can imagine the trouble is: my view's context got yanked by the context genie, so it needs to render itself fully again. Is there any setting I can use to persist the same context? Or is there something else going on here? There is no need for it to update the full rectangle after the initial display.
My drawRect is very simple:
- (void)drawRect:(CGRect)rect
{
    CGContextRef c = mDrawingLayer ? CGLayerGetContext(mDrawingLayer) : NULL;
    if(!mDrawingLayer)
    {
        c = UIGraphicsGetCurrentContext();
        mDrawingLayer = CGLayerCreateWithContext(c, self.bounds.size, NULL);
        c = CGLayerGetContext(mDrawingLayer);
        CGContextSetAllowsAntialiasing(c, true);
        CGContextSetShouldAntialias(c, true);
        CGContextSetLineCap(c, kCGLineCapRound);
        CGContextSetLineJoin(c, kCGLineJoinRound);
    }
    if(mClearFlag)
    {
        CGContextClearRect(c, self.bounds);
        mClearFlag = NO;
    }
    CGContextStrokePath(c);
    CFAbsoluteTime startTime = CFAbsoluteTimeGetCurrent();
    CGContextDrawLayerInRect(UIGraphicsGetCurrentContext(), self.bounds, mDrawingLayer);
    NSLog(@"%.2fms : %f x %f", (CFAbsoluteTimeGetCurrent() - startTime)*1000.f, rect.size.width, rect.size.height);
}

I found a useful thread on the Apple Dev Forums describing this exact problem. It has only existed since iOS 5.0, and the theory is that it happens because Apple introduced a double-buffering system, so the first two drawRects will always be full. However, there is no explanation for why this happens again after idle. The theory there is that the underlying buffer is not guaranteed by the GPU: it will be discarded at whim and will need to be recreated. The solution (until Apple issues some kind of real solution) is to ping the buffer so that it won't be released:
mDisplayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(pingRect)];
[mDisplayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];

- (void)pingRect
{
    //Already drawing
    if(mTouchCount > 0) return;
    //Even touching just one pixel will keep the buffer alive
    [self setNeedsDisplayInRect:CGRectMake(0, 0, 1, 1)];
}
The only weakness is if the user keeps their finger perfectly still for more than 5 seconds, but I think that is an acceptable risk.
EDIT: Interesting update. It turns out simply calling setNeedsDisplay is enough to keep the buffer alive, even if drawRect returns immediately. So I added this to my drawRect method:
- (void)drawRect:(CGRect)rect
{
    if(rect.size.width == 1.f)
        return;
    //...
}
Hopefully, this will curb the extra power usage that this refresh method would otherwise incur.
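To keep that extra cost bounded, the display link can be stopped whenever the view is off screen. This is just a sketch; the original post does not cover teardown, and willMoveToWindow: as the hook is an assumption:

// Hypothetical housekeeping: stop pinging while off screen (no backing store
// to keep alive), and resume when the view returns to a window.
- (void)willMoveToWindow:(UIWindow *)newWindow
{
    [super willMoveToWindow:newWindow];
    if (newWindow == nil) {
        [mDisplayLink invalidate];
        mDisplayLink = nil;
    } else if (mDisplayLink == nil) {
        mDisplayLink = [CADisplayLink displayLinkWithTarget:self
                                                   selector:@selector(pingRect)];
        [mDisplayLink addToRunLoop:[NSRunLoop mainRunLoop]
                           forMode:NSDefaultRunLoopMode];
    }
}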

Related

CALayer behaves differently when created anew vs when applying CATransform3DIdentity

I am really stuck; I am clearly missing some CALayer knowledge.
I have a layer which knows how to draw an arrow.
I apply some transform to it on every frame.
But it only works as expected if I always create this layer from scratch and then apply the transform. It doesn't work if I try reusing the same old layer (by resetting it with CATransform3DIdentity). By "it doesn't work" I mean it flickers on the screen and the transform is not applied as needed, compared to the transform applied to a newly created layer.
My code looks as follows:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    transformModel = [OverlayTransformExtractor transformFromPixelBuffer:sampleBuffer];
    if(transformModel)
    {
        //I tried also not removing it every time but just transforming.. the effect is the same
        [firstLayer removeFromSuperlayer];
        if(!firstLayer)
        {
            firstLayer = [VNArrowLayer layer];
            firstLayer.frame = CGRectMake(videoView.frame.origin.x, videoView.frame.origin.y, videoView.frame.size.width, videoView.frame.size.height);
            originalTransform = firstLayer.transform;
        }
        //resetting the previous transforms
        firstLayer.transform = CATransform3DIdentity;
        [self applyTransform:transformModel.transform];
        dispatch_async(dispatch_get_main_queue(), ^(void)
        {
            [arrowView.layer addSublayer:firstLayer];
            [CATransaction begin];
            [CATransaction setValue:(id)kCFBooleanTrue
                             forKey:kCATransactionDisableActions];
            [firstLayer setNeedsDisplay];
            [CATransaction commit];
        });
    }
    else
    {
        dispatch_async(dispatch_get_main_queue(), ^
        {
            [firstLayer removeFromSuperlayer];
            [secondLayer removeFromSuperlayer];
        });
    }
}
Any suggestions as to why it behaves differently when it is newly created? I checked that the transform is the same. I even tried giving the reused layer the same position as a new one (0,0), but then it is even weirder (it stays in the top-left corner).
I also thought that maybe it was because of the implicit animations, so I tried turning them off: no change.
Adding a new layer every time is not acceptable: memory fills up pretty fast, even though I manually try setting firstLayer to nil.
So, as stated above, applying a transform to the existing layer (at every frame) flickered, while creating a new layer every frame was perfect but memory-intensive.
I could not work out what makes it flicker (maybe some constraint or layout behavior about which I cannot find any information), but I managed to make the new-layer-per-frame approach work. The problem was that I was removing the old layer and assigning nil to it off the main thread, which caused the deallocations to happen much later and the memory to fill up quickly.
So the solution is to remove the old layer, create the new one, and apply the transform, all on the main thread, as sketched below.
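A hedged sketch of that arrangement, reusing the names from the code above (how applyTransform: touches the new layer is an assumption):

// In the capture callback: tear down, recreate, and transform on the main
// thread, so the old layer deallocates promptly instead of piling up.
dispatch_async(dispatch_get_main_queue(), ^{
    [firstLayer removeFromSuperlayer];
    firstLayer = nil; // the old layer is released here, on the main thread

    firstLayer = [VNArrowLayer layer];
    firstLayer.frame = videoView.frame;
    [self applyTransform:transformModel.transform];

    [CATransaction begin];
    [CATransaction setDisableActions:YES];
    [arrowView.layer addSublayer:firstLayer];
    [firstLayer setNeedsDisplay];
    [CATransaction commit];
});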
If somebody can provide an answer that explains the flickering, I'll be glad to mark it as the correct one.

App processor usage increasing over time

I have been struggling with this problem for over a month, trying to figure out what is causing it, with no solution. Since the code is pretty long, I can't post all of it here.
Basically I have made a drawing app. When you double-tap the screen, everything resets, almost as if I am reloading the view. When I reset the scene, processor usage drops to around 9%, but when I start drawing again it climbs back to where I last left off. So say, for example, I draw an image and processor usage goes up to 50%; after a double-tap resets the view to its initial state, it drops to 9%. Then when I start drawing again it goes back up to 50%, and the next time 60%, 70%, and so on.
It may be hard to see what is causing the problem from this little information, so I can send my source code to anyone interested in helping; just PM me.
greentimer = [NSTimer scheduledTimerWithTimeInterval:0.02 target:self selector:@selector(movement2) userInfo:nil repeats:YES];

-(void)movement2{
    static int intigrer;
    intigrer = (intigrer+1)%3;
    UIGraphicsBeginImageContext(CGSizeMake(320, 568));
    [drawImage.image drawInRect:rekked];
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetLineCap(ctx, kCGLineCapRound);
    CGContextSetLineWidth(ctx, 8.0);
    CGContextSetRGBStrokeColor(ctx, r12, g12, b12, 1);
    CGContextBeginPath(ctx);
    if (intigrer == 1 && integrer2 < greenran - greenran2) {
        CGPathMoveToPoint(path, NULL, greentmporary.x, greentmporary.y);
        CGPathAddLineToPoint(path, NULL, greenpoint1.x, greenpoint1.y);
    }
    green.center = greenpoint1;
    if (integrer2 < greenran - greenran2) {
        CGContextMoveToPoint(ctx, greentmporary.x, greentmporary.y);
        CGContextAddLineToPoint(ctx, greenpoint1.x, greenpoint1.y);
    }
    CGContextStrokePath(ctx);
    [drawImage setFrame:CGRectMake(0, 0, 320, 568)];
    drawImage.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // [self updatePoint2:YES];
    static BOOL yes;
    if (!yes) {
        [self.view insertSubview:drawImage atIndex:0];
        yes = YES;
    }
    ctx = nil;
}
You need to release your CGPaths and CGImages once you are done with them:

CGPathRelease(path);
CGImageRelease(image);

Do not call CGContextRelease(context) on a context obtained from UIGraphicsGetCurrentContext(), though: you only release contexts you own, i.e. ones you created yourself (for example with CGBitmapContextCreate).
I think your problem is that you are making self.view more complex with each fire of the timer, because you keep adding subviews to it. So the complexity of your scene increases every time it's rendered, until it's completely reset. To be honest, I could not really follow all your code, as I am not familiar with what you are trying to achieve.
I think an approach to solving the problem is to run your program in Instruments with the Profile option (instead of doing a plain Run) and select the 'Automation' template.
There is a way to issue logElementTree() on your running program; it gives a dump of the UIView hierarchy, with images. There are lots of good articles on it, e.g.
http://cocoamanifest.net/articles/2011/05/uiautomation-an-introduction.html

How to simply add graphics content to current screen in iOS

This seems like such a basic thing to want, I can't believe I'm not able to find out how to do it. To make the description easy to understand, suppose I simply want to draw a bunch of random rectangles on the screen. These random rectangles would keep adding on top of each other repeatedly until something stopped the process. How would one do that?
The closest explanation I've seen is drawing applications, where the basic scheme is to draw into an image view, first copying the previous image into the new image and then adding the new content. Copying the original image sure seems like a waste of effort, and it sure seems like it should be possible to simply write the new content in place over whatever is there. Am I missing something obvious?
Note that drawRect replaces the entire frame. It works well for drawing a small set of objects, but it quickly becomes awkward when there's an indefinite amount of history that also needs to be displayed.
Edit: I'm attaching some sample images that are screen prints from a Mix C program that does what I'm after. Essentially, there are cellular automata that move around the screen leaving trails. The color of the trail depends upon the logic in the automaton as well as the color of the pixel the automaton has just traveled to. The automata should be able to move at rates of hundreds of pixels per second. Because of the logic used by the automata, I need to be able not only to write quickly to the image but also to query the color of a pixel (or a mirrored data structure).
Typically you do this by either creating separate paths or layers for all your rectangles (if you want to keep track of them), or by drawing repeatedly into a CGBitmapContextRef, and then converting that into an image and drawing it in drawRect:. This is basically the same approach you're describing ("where the basic scheme is to draw into an image view…") except there's no need to copy the image. You just keep using the same context and making new images out of it.
The other tool you could use here is a CGLayer. The Core Graphics team discourages its use because of performance concerns, but it does make this kind of drawing much more convenient. When you look at the docs, and they say "benefit from improved performance," remember that this was written in 2006, and when I asked the Core Graphics team about it, they said that the faster answer today is CGBitmapContext. But you can't beat CGLayer for convenience on this kind of problem.
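For illustration, a minimal sketch of the CGLayer variant (the _scratchLayer ivar and the helper method are hypothetical names, not from the answer; this mirrors the pattern in the first question above):

// Hypothetical CGLayer accumulator: created once against the screen context,
// new content is drawn into the layer, and the whole layer is blitted in drawRect:.
- (void)drawRect:(CGRect)rect
{
    CGContextRef screen = UIGraphicsGetCurrentContext();
    if (!_scratchLayer) {
        _scratchLayer = CGLayerCreateWithContext(screen, self.bounds.size, NULL);
    }
    CGContextDrawLayerInRect(screen, self.bounds, _scratchLayer);
}

- (void)addRectangle:(CGRect)frame withColor:(UIColor *)color
{
    if (!_scratchLayer) return; // created lazily on the first drawRect:
    CGContextRef c = CGLayerGetContext(_scratchLayer);
    CGContextSetFillColorWithColor(c, color.CGColor);
    CGContextFillRect(c, frame);
    [self setNeedsDisplayInRect:frame];
}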
You should be fine maintaining a CGBitmapContext that you continually write into (and which also allows you to read pixels back). When it changes, call setNeedsDisplayInRect:. In drawRect:, create the image and draw it using CGContextDrawImage, passing the rect you were given. (You may be passed the entire rect.)
It may be a little more flexible to do this on the CALayer instead of the UIView, but I doubt you'll see a great difference in performance. The view passes drawing to its layer.
The number of times a second this updates isn't really that important. drawRect: will not be called more often than the frame rate (max of 60 fps), no matter how often you call setNeedsDisplayInRect:. So you won't be creating images hundreds or thousands of times a second; just at the time that you need to draw something.
Are you seeing particular performance problems, or are you just concerned that you may encounter them in the future? Do you have any sample code that shows the issue? I'm not saying it can't be slow (it might be if you're trying to do this full screen on a retina display). But you want to start with the simple solution and then optimize. Apple does a lot of graphics optimizations behind the scenes. Don't try to second-guess them too much. They generate and draw images really well.
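To make the recommended approach concrete, here is a hedged sketch of a view that owns one long-lived CGBitmapContext. The class name, ivar, and helper method are illustrative, not from the answer:

// Hypothetical PersistentCanvasView: one bitmap context for the view's
// lifetime; callers draw into it and only dirty rects are invalidated.
@interface PersistentCanvasView : UIView
@end

@implementation PersistentCanvasView {
    CGContextRef _bitmapContext; // created once and reused, never per frame
}

- (CGContextRef)bitmapContext
{
    if (!_bitmapContext) {
        CGFloat scale = self.contentScaleFactor;
        size_t width  = (size_t)(self.bounds.size.width * scale);
        size_t height = (size_t)(self.bounds.size.height * scale);
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        _bitmapContext = CGBitmapContextCreate(NULL, width, height, 8, 0, space,
                                               kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(space);
        // Flip so drawing uses UIKit-style top-left coordinates.
        CGContextTranslateCTM(_bitmapContext, 0, height);
        CGContextScaleCTM(_bitmapContext, scale, -scale);
    }
    return _bitmapContext;
}

- (void)addRectangle:(CGRect)frame withColor:(UIColor *)color
{
    CGContextRef c = [self bitmapContext];
    CGContextSetFillColorWithColor(c, color.CGColor);
    CGContextFillRect(c, frame);
    [self setNeedsDisplayInRect:frame]; // invalidate only the dirty rect
}

- (void)drawRect:(CGRect)rect
{
    // Turn the accumulated bitmap into an image and draw it; nothing is
    // copied from a previous image. CGBitmapContextGetData([self bitmapContext])
    // can also be used to read pixel colors back out of the same buffer.
    CGImageRef cgImage = CGBitmapContextCreateImage([self bitmapContext]);
    UIImage *image = [UIImage imageWithCGImage:cgImage
                                         scale:self.contentScaleFactor
                                   orientation:UIImageOrientationUp];
    CGImageRelease(cgImage);
    [image drawInRect:self.bounds];
}

- (void)dealloc
{
    CGContextRelease(_bitmapContext);
}
@end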
I've accepted another answer, but I'm including my own answer to show the alternative I used for testing.
-(void)viewWillLayoutSubviews {
    self.pixelCount = 0;
    self.seconds = 0;
    CGRect frame = self.testImageView.bounds;
    UIGraphicsBeginImageContextWithOptions(frame.size, YES, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, 1.0);
    CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
    CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
    CGContextMoveToPoint(context, 0.0, 0.0);
    CGContextAddRect(context, frame);
    CGContextFillRect(context, frame);
    CGContextStrokePath(context);
    CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
    CGContextStrokePath(context);
    UIImage *blank = UIGraphicsGetImageFromCurrentImageContext();
    // Note: the image context is deliberately never ended, so that
    // UIGraphicsGetImageFromCurrentImageContext can be called again later.
    self.context = context;
    self.testImageView.image = blank;
    // This timer has no delay, so it will draw squares as fast as possible.
    [NSTimer scheduledTimerWithTimeInterval:0.0 target:self selector:@selector(drawRandomRectangle) userInfo:nil repeats:NO];
    // This timer is used to control the frequency at which the imageView's image is updated.
    [NSTimer scheduledTimerWithTimeInterval:1/20.f target:self selector:@selector(updateImage) userInfo:nil repeats:YES];
    // This timer just outputs the counter once per second so I can record statistics.
    [NSTimer scheduledTimerWithTimeInterval:1.0f target:self selector:@selector(displayCounter) userInfo:nil repeats:YES];
}

-(void)updateImage
{
    self.testImageView.image = UIGraphicsGetImageFromCurrentImageContext();
}

-(void)displayCounter
{
    self.seconds++;
    NSLog(@"%d", self.pixelCount/self.seconds);
}

-(void)drawRandomRectangle
{
    int x1 = arc4random_uniform(self.view.bounds.size.width);
    int y1 = arc4random_uniform(self.view.bounds.size.height);
    int xdif = 20;
    int ydif = 20;
    x1 -= xdif/2;
    y1 -= ydif/2;
    CGFloat red = (arc4random() % 256) / 255.0f;
    CGFloat green = (arc4random() % 256) / 255.0f;
    CGFloat blue = (arc4random() % 256) / 255.0f;
    UIColor *randomColor = [UIColor colorWithRed:red green:green blue:blue alpha:1.0f];
    CGRect frame = CGRectMake(x1*1.0f, y1*1.0f, xdif*1.0f, ydif*1.0f);
    CGContextSetStrokeColorWithColor(self.context, [UIColor blackColor].CGColor);
    CGContextSetFillColorWithColor(self.context, randomColor.CGColor);
    CGContextSetLineWidth(self.context, 1.0);
    CGContextAddRect(self.context, frame);
    CGContextStrokePath(self.context);
    CGContextFillRect(self.context, frame);
    CGContextStrokePath(self.context);
    if (self.pixelCount < 100000) {
        [NSTimer scheduledTimerWithTimeInterval:0.0 target:self selector:@selector(drawRandomRectangle) userInfo:nil repeats:NO];
    }
    self.pixelCount++;
}
The graph shows image updates per second on the x-axis and the number of 20x20 squares drawn to the context per second on the y-axis.

Drawing circles in custom UIView

I have a custom UIView with drawRect:
-(void)drawRect:(CGRect)rect{
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self drawControlPointsWithCoordsX:point.x andY:point.y forRect:CGRectMake(0, 0, 320, 600) andContext:context];
    NSLog(@"Values: %f and %f", point.x, point.y);
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        [self startTimer];
    });
}
The [self startTimer] call is the initialization method; it starts a timer which produces pairs of coordinates, and it is called only once. The coordinates come in this form:
Values: 276.711670 and 117.279999
and they keep changing all the time. All the values are logged. In the method that the timer repeats every 0.1 s, I call:
[self drawRect:CGRectMake(0, 0, 320, 600)];
The log works, the coordinates keep changing and are correct, but only two or three dots are plotted (instead of roughly 400), and they are in the wrong place.
This is the code to draw the points:
- (void)drawControlPointsWithCoordsX:(int)x andY:(int)y forRect:(CGRect)rect andContext:(CGContextRef)contextRef
{
    UIGraphicsPushContext(contextRef);
    CGContextSetLineWidth(contextRef, 2.0);
    CGContextSetRGBFillColor(contextRef, 0, 0, 1.0, 1.0);
    CGContextSetRGBStrokeColor(contextRef, 0, 0, 1.0, 1.0);
    CGRect circlePoint = (CGRectMake(x, y, 5.0, 5.0));
    CGContextFillEllipseInRect(contextRef, circlePoint);
    UIGraphicsPopContext(); // balance the earlier UIGraphicsPushContext
}
First, you should not call drawRect: manually; it gets called automatically when the view needs to be redrawn. If you invoke it yourself like this, the behavior is effectively undefined, because the current graphics context may be different or not valid at all. The following is taken from Apple's documentation for the drawRect: method:
This method is called when a view is first displayed or when an event occurs that invalidates a visible part of the view. You should never call this method directly yourself. To invalidate part of your view, and thus cause that portion to be redrawn, call the setNeedsDisplay or setNeedsDisplayInRect: method instead.
Also, you should probably not be starting a timer from within drawRect:. Start it somewhere else, such as wherever you first display this view. Note too that each time the view is redrawn, whatever was previously there is cleared, not overlaid. So if you expect the drawing to build on what was previously drawn, you must keep track of every point that you draw and redraw all of them up to the current one. You could accomplish this in other ways that avoid redrawing what you've already drawn, but they would be more complicated.
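A hedged sketch of that restructuring (the points array property and the nextPoint helper are assumptions, not from the question):

// The timer only records a point and invalidates the view;
// UIKit then calls drawRect:, which redraws the whole history.
- (void)timerFired:(NSTimer *)timer
{
    CGPoint p = [self nextPoint]; // however the coordinate pairs are produced
    [self.points addObject:[NSValue valueWithCGPoint:p]];
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBFillColor(context, 0, 0, 1.0, 1.0);
    for (NSValue *value in self.points) {
        CGPoint p = [value CGPointValue];
        CGContextFillEllipseInRect(context, CGRectMake(p.x, p.y, 5.0, 5.0));
    }
}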

How to cache CGContextRef

Unsatisfied with my previous results, I have been asked to create a freehand drawing view that will not blur when zoomed. The only way I can imagine this being possible is to use a CATiledLayer, because otherwise drawing a line while zoomed is just too slow. Currently I have it set up so that it redraws every line every time, but I want to know if I can cache the results of the previous lines (not as pixels, because they need to scale well) in a context or something.
I thought about CGBitmapContext, but would that mean I would need to tear down and set up a new context after every zoom? The problem is that on a retina display the line drawing is too slow (on iPad 2 it is so-so), especially when drawing while zoomed. There is an app in the App Store called GoodNotes which beautifully demonstrates that this is possible, and possible to do smoothly, but I can't understand how they are doing it. Here is my code so far (the result of most of today):
- (void)drawRect:(CGRect)rect
{
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(c, mLineWidth);
    CGContextSetAllowsAntialiasing(c, true);
    CGContextSetShouldAntialias(c, true);
    CGContextSetLineCap(c, kCGLineCapRound);
    CGContextSetLineJoin(c, kCGLineJoinRound);
    //Protect the local variables against the multithreaded nature of CATiledLayer
    [mLock lock];
    NSArray *pathsCopy = [mStrokes copy];
    for(UIBezierPath *path in pathsCopy) //**Would like to cache these**
    {
        CGContextAddPath(c, path.CGPath);
        CGContextStrokePath(c);
    }
    if(mCurPath)
    {
        CGContextAddPath(c, mCurPath.CGPath);
        CGContextStrokePath(c);
    }
    CGRect pathBounds = mCurPath.bounds;
    if(pathBounds.size.width > 32 || pathBounds.size.height > 32)
    {
        [mStrokes addObject:mCurPath];
        mCurPath = [[UIBezierPath alloc] init];
    }
    [mLock unlock];
}
Profiling shows the hottest function by far is GCSFillDRAM8by1
First, as the path stroking is the most expensive operation, you shouldn't hold the lock around it, as this prevents you from drawing tiles concurrently on different cores.
Secondly, I think you could avoid calling CGContextStrokePath multiple times by adding all the paths to the context and stroking them all at once:
[mLock lock];
for ( UIBezierPath *path in mStrokes ) {
    CGContextAddPath(c, path.CGPath);
}
if ( mCurPath ) {
    CGContextAddPath(c, mCurPath.CGPath);
}
CGRect pathBounds = mCurPath.bounds;
if ( pathBounds.size.width > 32 || pathBounds.size.height > 32 )
{
    [mStrokes addObject:mCurPath];
    mCurPath = [[UIBezierPath alloc] init];
}
[mLock unlock];

CGContextStrokePath(c);
The CGContextRef is just a canvas in which the drawing operations occur. You cannot cache it, but you can create a CGImageRef with a flattened bitmap image of your paths and reuse that image. This won't help with zooming (as you'd need to recreate the image when the level of detail changes), but it can be useful for improving performance when the user is drawing a really long path.
There is a really interesting WWDC 2012 Session Video on that subject: Optimizing 2D Graphics and Animation Performance.
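As an illustration of that suggestion, a hedged sketch that flattens the committed strokes from the question's mStrokes array into a cached image (mStrokeCache is a hypothetical UIImage ivar; the cache must be rebuilt when the zoom level of detail changes, and mLock should guard mStrokes as in the question):

- (void)rebuildStrokeCache
{
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0);
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(c, mLineWidth);
    CGContextSetLineCap(c, kCGLineCapRound);
    CGContextSetLineJoin(c, kCGLineJoinRound);
    for (UIBezierPath *path in mStrokes) {
        CGContextAddPath(c, path.CGPath);
    }
    CGContextStrokePath(c); // one stroke for every committed path
    mStrokeCache = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef c = UIGraphicsGetCurrentContext();
    [mStrokeCache drawInRect:self.bounds]; // blit the flattened strokes
    if (mCurPath) { // only the in-progress path is stroked as vectors
        CGContextSetLineWidth(c, mLineWidth);
        CGContextAddPath(c, mCurPath.CGPath);
        CGContextStrokePath(c);
    }
}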
The bottleneck was actually the way I was using CATiledLayer. I guess freehand input is just too much for it to keep updating. I had set it up with levels of detail, as I had seen in the docs and tutorials online, but in the end I didn't need that much. I just hooked up the scroll view delegate, cleared the contents when zooming finished, and changed the contentsScale of the layer to match the scroll view. The result was beautiful (the content disappears and fades back in, but that can't be helped).
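For reference, a hedged sketch of that delegate hook (drawingView is a hypothetical reference to the CATiledLayer-backed view; the exact reset policy is an assumption):

// When a zoom gesture ends, match the layer's contentsScale to the zoom so
// strokes re-render sharply; clearing contents drops the stale, blurry tiles.
- (void)scrollViewDidEndZooming:(UIScrollView *)scrollView
                       withView:(UIView *)view
                        atScale:(CGFloat)scale
{
    CATiledLayer *tiledLayer = (CATiledLayer *)self.drawingView.layer;
    tiledLayer.contents = nil;
    tiledLayer.contentsScale = scale * [UIScreen mainScreen].scale;
    [tiledLayer setNeedsDisplay];
}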
