Sudden memory spike with CGContextFillRect - iOS

I see a sharp increase in memory usage (from 39 MB to 186 MB on iPad) when the CGContextFillRect call in the code below executes. Is there something wrong here?
My application eventually crashes.
PS: Surprisingly, the memory spike is seen on 3rd and 4th gen iPads but not on a 2nd gen iPad.
- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        [self setBackgroundColor:[UIColor clearColor]];
    }
    return self;
}

- (id)initWithFrame:(CGRect)iFrame andHollowCircles:(NSArray *)iCircles {
    self = [super initWithFrame:iFrame];
    if (self) {
        [self setBackgroundColor:[UIColor clearColor]];
        self.circleViews = iCircles;
    }
    return self;
}

- (void)drawHollowPoint:(CGPoint)iHollowPoint withRadius:(NSNumber *)iRadius {
    CGContextRef currentContext = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(currentContext, self.circleRadius.floatValue);
    [[UIColor whiteColor] setFill];
    CGContextAddArc(currentContext, iHollowPoint.x, iHollowPoint.y, iRadius.floatValue, 0, M_PI * 2, YES);
    CGContextFillPath(currentContext);
}
- (void)drawRect:(CGRect)rect {
    CGContextRef currentContext = UIGraphicsGetCurrentContext();
    CGContextSaveGState(currentContext);
    CGRect aRect = [self.superview bounds];
    [[UIColor whiteColor] setFill];
    CGContextFillRect(currentContext, aRect);
    CGContextSaveGState(currentContext);
    [[UIColor blackColor] setFill];
    CGContextFillRect(currentContext, aRect);
    CGContextRestoreGState(currentContext);
    for (MyCircleView *circleView in self.circleViews) {
        [self drawHollowPoint:circleView.center withRadius:circleView.circleRadius];
    }
    CGContextTranslateCTM(currentContext, 0, self.bounds.size.height);
    CGContextScaleCTM(currentContext, 1.0, -1.0);
    CGContextSaveGState(currentContext);
}

This code doesn't quite make sense; I assume you've removed parts of it? You create a blank alpha mask and then throw it away.
If the above code is really what you're doing, you don't really need to draw anything. You could just create a 12MB memory area and fill it with repeating 1 0 0 0 (opaque black in ARGB) and then create an image off of that. But I assume you're actually doing more than that.
Likely you have this view configured with contentScaleFactor set to match the scale from UIScreen, and this view is very large. 3rd and 4th gen iPads have a Retina display, so the scale is 2 and the memory required to draw a view is 4x as large.
That said, a full-screen image should only take about 12 MB (2048 × 1536 pixels × 4 bytes/pixel ≈ 12.6 MB). The fact that you're seeing 10x that suggests something more is going on, but I suspect it's still related to drawing too many copies.
If possible, you can step the scale down to 1 to make Retina and non-Retina behave the same.
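For instance, a minimal sketch of that (where you set this depends on how the view is created; circleOverlayView is an illustrative name, not from the question):
// Use a non-Retina backing store for this one view: drawing is scaled up
// on Retina displays, trading some sharpness for 1/4 of the memory.
circleOverlayView.contentScaleFactor = 1.0;
circleOverlayView.layer.contentsScale = 1.0;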
EDIT:
Your edited code is very different from your original code. There's no attempt to make an image in this code. I've tested it as best I can, and I don't see any surprising memory spike. But there are several oddities:
You're not correctly balancing CGContextSaveGState with CGContextRestoreGState. That alone might cause a memory problem.
Why are you drawing the rect all in white and then all in black?
Your rect is [self.superview bounds]. That's in the wrong coordinate space; you almost certainly mean [self bounds].
Why do you flip the context right before returning from drawRect: and then save the graphics state? This doesn't make sense at all.
I would assume your drawRect: would look like this:
- (void)drawRect:(CGRect)rect {
    [[UIColor blackColor] setFill];
    UIRectFill(rect); // You're only responsible for drawing the area given in `rect`
    for (MyCircleView *circleView in self.circleViews) {
        [self drawHollowPoint:circleView.center withRadius:circleView.circleRadius];
    }
}
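And if you do need temporary graphics-state changes (as drawHollowPoint: makes to the fill color and line width), a sketch of the balanced pattern:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);    // push a copy of the current graphics state
// ... set fill color, line width, CTM changes, draw ...
CGContextRestoreGState(ctx); // pop it: every save needs exactly one restore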

Related

App processor usage increasing over time

I have been struggling with this problem for over a month, trying to figure out what is causing it, with no solution. Since the code is pretty long, I can't post all of it here.
Basically, I have made a drawing app. When you double-tap the screen, everything resets, almost as if I am reloading the view. When I reset the scene, processor usage goes down to around 9%, but when I start drawing again it goes back up to where I last left off. So, for example, if I draw an image and processor usage goes up to 50%, then double-tap to reset the view to its initial state, it drops to 9%. When I start drawing again it goes back up to 50%, and the next time 60%, 70%, etc.
Maybe it is hard to see what is causing the problem due to the lack of information, so I could send my source code to anyone interested in helping; just PM me.
greentimer = [NSTimer scheduledTimerWithTimeInterval:0.02 target:self selector:@selector(movement2) userInfo:nil repeats:YES];
- (void)movement2 {
    static int intigrer;
    intigrer = (intigrer + 1) % 3;
    UIGraphicsBeginImageContext(CGSizeMake(320, 568));
    [drawImage.image drawInRect:rekked];
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetLineCap(ctx, kCGLineCapRound);
    CGContextSetLineWidth(ctx, 8.0);
    CGContextSetRGBStrokeColor(ctx, r12, g12, b12, 1);
    CGContextBeginPath(ctx);
    if (intigrer == 1 && integrer2 < greenran - greenran2) {
        CGPathMoveToPoint(path, NULL, greentmporary.x, greentmporary.y);
        CGPathAddLineToPoint(path, NULL, greenpoint1.x, greenpoint1.y);
    }
    green.center = greenpoint1;
    if (integrer2 < greenran - greenran2) {
        CGContextMoveToPoint(ctx, greentmporary.x, greentmporary.y);
        CGContextAddLineToPoint(ctx, greenpoint1.x, greenpoint1.y);
    }
    CGContextStrokePath(ctx);
    [drawImage setFrame:CGRectMake(0, 0, 320, 568)];
    drawImage.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // [self updatePoint2:YES];
    static BOOL yes;
    if (!yes) {
        [self.view insertSubview:drawImage atIndex:0];
        yes = YES;
    }
    ctx = nil;
}
You need to release your CGPaths and CGImages:
CGPathRelease(path);
CGImageRelease(image);
CGContextRelease(context);
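As a sketch of the pattern (assuming `path` is a CGMutablePathRef you create yourself, as the question's code suggests): every Create or Copy call should be balanced by exactly one release once the object is no longer needed.
CGMutablePathRef path = CGPathCreateMutable(); // Create => you own it (+1)
CGPathMoveToPoint(path, NULL, 0, 0);
CGPathAddLineToPoint(path, NULL, 10, 10);
// ... use the path for drawing ...
CGPathRelease(path);                           // balance the Create when done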
I think your problem is that you are making self.view more complex with each fire of the timer, because you keep adding subviews to it. So the complexity of your scene increases each time it's rendered, until it's completely reset. I could not really follow all your code, to be honest, as I am not familiar with what you are trying to achieve.
I think an approach to solving the problem is to run your program in Instruments with the Profile option (instead of doing a 'Run'). Select the 'Automation' template.
There is a way to issue logElementTree() on your running program, and it gives a dump of the UIView hierarchy with images. There are lots of good articles on it, e.g.
http://cocoamanifest.net/articles/2011/05/uiautomation-an-introduction.html

ARC does not release CALayer

I have a code block that aims to capture a snapshot of PDF-based custom views, one per page. To accomplish this, I create a view controller in a loop and iterate over the pages. The problem is that even after the view controller is released, the custom view isn't released and still shows as live in Instruments. Since the loop iterates many times, this blows up memory (up to 500 MB, with 42 views alive) and crashes.
Here is the iteration code:
do
{
    __pageDictionary = CFDictionaryGetValue(_allPages, __pageID);
    CUIPageViewController *__pageViewController = [self _pageWithID:__pageID];
    [__pageViewController addMainLayers];
    [[APP_DELEGATE assetManager] temporarilyPasteSnapshotSource:__pageViewController.view];
    UIImage *__snapshotImage = [__pageViewController captureSnapshot];
    [[AOAssetManager sharedManager] saveImage:__snapshotImage
                         forPublicationBundle:_publicationTileViewController.publication.bundle
                                       pageID:(__bridge NSString *)__pageID];
    [[APP_DELEGATE assetManager] removePastedSnapshotSource:__pageViewController.view];
    __snapshotImage = nil;
    __pageViewController = nil;
    ind += 6 * 0.1 / CFDictionaryGetCount(_allPages);
}
while (![(__bridge NSString *)(__pageID = CFDictionaryGetValue(__pageDictionary,
                                                               kMFMarkupKeyPageNextPageID)) isMemberOfClass:[NSNull class]]);
_generatingSnapshots = NO;
And here is the captureSnapshot method:
- (UIImage *)captureSnapshot
{
    CGRect rect = [self.view bounds];
    UIGraphicsBeginImageContextWithOptions(rect.size, YES, 0.0f);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.view.layer renderInContext:context];
    UIImage *capturedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return capturedImage;
}
Instruments: (screenshot omitted)
Edit for further details:
The code below is from CUIPDFView, a subclass of UIView:
- (void)drawRect:(CGRect)rect
{
    [self drawInContext:UIGraphicsGetCurrentContext()];
}

- (void)drawInContext:(CGContextRef)context
{
    CGRect drawRect = CGRectMake(self.bounds.origin.x, self.bounds.origin.y, self.bounds.size.width, self.bounds.size.height);
    CGContextSetRGBFillColor(context, 1.0000, 1.0000, 1.0000, 1.0f);
    CGContextFillRect(context, drawRect);
    // PDF page drawing expects a lower-left coordinate system, so we flip the
    // coordinate system before we start drawing.
    CGContextTranslateCTM(context, 0.0, self.bounds.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    // Grab the first PDF page
    CGPDFPageRef page = CGPDFDocumentGetPage(_pdfDocument, _pageNumberToUse);
    // We're about to modify the context CTM to draw the PDF page where we want it,
    // so save the graphics state in case we want to do more drawing.
    CGContextSaveGState(context);
    // CGPDFPageGetDrawingTransform provides an easy way to get the transform for a
    // PDF page. It will scale down to fit, including any base rotations necessary
    // to display the PDF page correctly.
    CGAffineTransform pdfTransform = CGPDFPageGetDrawingTransform(page, kCGPDFCropBox, self.bounds, 0, true);
    // And apply the transform.
    CGContextConcatCTM(context, pdfTransform);
    // Finally, we draw the page and restore the graphics state for further manipulations.
    CGContextDrawPDFPage(context, page);
    CGContextRestoreGState(context);
}
When I delete the drawRect: implementation, the memory allocation problem goes away, but obviously the view can't render the PDF anymore.
Try putting an @autoreleasepool inside your loop:
do
{
    @autoreleasepool
    {
        __pageDictionary = CFDictionaryGetValue(_allPages, __pageID);
        CUIPageViewController *__pageViewController = [self _pageWithID:__pageID];
        [__pageViewController addMainLayers];
        [[APP_DELEGATE assetManager] temporarilyPasteSnapshotSource:__pageViewController.view];
        UIImage *__snapshotImage = [__pageViewController captureSnapshot];
        [[AOAssetManager sharedManager] saveImage:__snapshotImage
                             forPublicationBundle:_publicationTileViewController.publication.bundle
                                           pageID:(__bridge NSString *)__pageID];
        [[APP_DELEGATE assetManager] removePastedSnapshotSource:__pageViewController.view];
        __snapshotImage = nil;
        __pageViewController = nil;
        ind += 6 * 0.1 / CFDictionaryGetCount(_allPages);
    }
}
while (![(__bridge NSString *)(__pageID = CFDictionaryGetValue(__pageDictionary,
                                                               kMFMarkupKeyPageNextPageID)) isMemberOfClass:[NSNull class]]);
This will flush the autorelease pool each time through the loop, releasing all the autoreleased objects.
I would be surprised if this is actually a problem with ARC. An object that is still alive still holds a strong reference to the layer.
What does [AOAssetManager saveImage:...] do with the image? Are you sure it is not holding onto it?
Is _pageWithID: doing something that keeps a pointer to the CUIPageViewController around?

Performance Issues When Using Many CALayer Masks

I am trying to use CAShapeLayer to mask a CALayer in my iOS app, since it takes a fraction of the CPU time to mask an image that way versus manually masking one in a bitmap context.
However, when I have several dozen or more images layered over each other, the CAShapeLayer-masked UIImageView is slow to move in response to my touch.
Here is some example code:
UIImage *image = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"SomeImage.jpg" ofType:nil]];
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddEllipseInRect(path, NULL, CGRectMake(0.f, 0.f, image.size.width * .25, image.size.height * .25));
for (int i = 0; i < 200; i++) {
    SLTUIImageView *imageView = [[SLTUIImageView alloc] initWithImage:image];
    imageView.frame = CGRectMake(arc4random_uniform(CGRectGetWidth(self.view.bounds)), arc4random_uniform(CGRectGetHeight(self.view.bounds)), image.size.width * .25, image.size.height * .25);
    CAShapeLayer *shape = [CAShapeLayer layer];
    shape.path = path;
    imageView.layer.mask = shape;
    [self.view addSubview:imageView];
    [imageView release];
}
CGPathRelease(path);
With the above code, imageView is very laggy. However, it reacts instantly if I mask it manually in a bitmap context:
UIImage *image = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"3.0-Pad-Classic0.jpg" ofType:nil]];
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddEllipseInRect(path, NULL, CGRectMake(0.f, 0.f, image.size.width * .25, image.size.height * .25));
for (int i = 0; i < 200; i++) {
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(image.size.width * .25, image.size.height * .25), NO, [[UIScreen mainScreen] scale]);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextAddPath(ctx, path);
    CGContextClip(ctx);
    [image drawInRect:CGRectMake(-(image.size.width * .25), -(image.size.height * .25), image.size.width, image.size.height)];
    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    SLTUIImageView *imageView = [[SLTUIImageView alloc] initWithImage:finalImage];
    imageView.frame = CGRectMake(arc4random_uniform(CGRectGetWidth(self.view.bounds)), arc4random_uniform(CGRectGetHeight(self.view.bounds)), finalImage.size.width, finalImage.size.height);
    [self.view addSubview:imageView];
    [imageView release];
}
CGPathRelease(path);
By the way, here is the code for SLTUIImageView; it's just a simple subclass of UIImageView that responds to touches (for anyone who was wondering):
- (id)initWithImage:(UIImage *)image {
    self = [super initWithImage:image];
    if (self) {
        self.userInteractionEnabled = YES;
    }
    return self;
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    [self.superview bringSubviewToFront:self];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    self.center = [touch locationInView:self.superview];
}
Is it possible to somehow optimize how the CAShapeLayer masks the UIImageView so that performance improves? I have tried to find the bottleneck using the Time Profiler in Instruments, but I can't tell exactly what is causing it.
I have tried setting shouldRasterize to YES on both layer and layer.mask, but neither seems to have any effect. I'm not sure what to do.
Edit:
I have done more testing and found that if I use just a regular CALayer to mask another CALayer (layer.mask = someOtherLayer), I have the same performance issues. It seems the problem isn't specific to CAShapeLayer; rather, it is specific to the mask property of CALayer.
Edit 2:
After learning more about the Core Animation tool in Instruments, I learned that the view is being rendered offscreen each time it moves. Setting shouldRasterize to YES when the touch begins and turning it off when the touch ends keeps the view green (that is, cached) in Instruments, but performance is still terrible. I believe this is because even though the view is cached, if it isn't opaque it still has to be re-rendered with each frame.
One thing to emphasize is that if only a few views are masked (say, around ten), performance is pretty good. However, increase that to 100 or more and performance lags. I imagine this is because when one moves over the others, they all have to be re-rendered.
My conclusion is this: I have one of two options.
First, there must be some way to permanently mask a view (render it once and call it good). I know this can be done via the bitmap context route, as shown in my example code, but when a layer masks its view it happens instantly, whereas masking in a bitmap context as shown is almost incomparably slower.
Second, there must be some faster way to do it via the bitmap context route. If there is an expert in masking images or views, your help would be very much appreciated.
You've gotten pretty far along, and I believe you are almost to a solution. What I would do is simply an extension of what you've already tried. Since many of these layers end up in final positions that remain constant relative to the other layers and the mask, simply render all those "finished" layers into a single bitmap context. That way, every time you write out a layer to that single context, you'll have one less layer slowing down the animation/rendering process.
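A minimal sketch of that idea (finishedViews and backgroundImageView are illustrative names, not from the question):
// Composite every settled (no-longer-moving) view into one bitmap, so only
// the view being dragged still pays the per-frame masking cost.
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, [[UIScreen mainScreen] scale]);
CGContextRef ctx = UIGraphicsGetCurrentContext();
for (UIView *finished in finishedViews) {
    CGContextSaveGState(ctx);
    CGContextTranslateCTM(ctx, finished.frame.origin.x, finished.frame.origin.y);
    [finished.layer renderInContext:ctx]; // renders the layer tree, mask included
    CGContextRestoreGState(ctx);
    [finished removeFromSuperview];
}
backgroundImageView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();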
Quartz (drawRect:) is slower than Core Animation for many reasons: see CALayer vs CGContext drawRect vs CALayer. But it is necessary to use it correctly.
The documentation offers some advice: Improving Core Animation Performance.
If you want high performance, you could also try AsyncDisplayKit. This framework lets you build smooth, responsive apps.
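For example, a minimal sketch of the AsyncDisplayKit approach (assuming the framework is linked; the asset name is illustrative):
#import <AsyncDisplayKit/AsyncDisplayKit.h>

ASImageNode *imageNode = [[ASImageNode alloc] init];
imageNode.image = [UIImage imageNamed:@"SomeImage.jpg"];
imageNode.frame = CGRectMake(0, 0, 100, 100);
// The node decodes and renders its contents off the main thread,
// then hands a finished bitmap to its backing view.
[self.view addSubview:imageNode.view];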

drawInRect method is too slow

Now I have the following:
- (void)drawRect:(CGRect)rect
{
    // some drawing
    [bgImage drawInRect:self.bounds];
    // some drawing
}
I have more than 40 views with text and some marks inside. I need to repaint all these views when the user taps; it should be really fast!
I profiled my code in Instruments and saw that 75% of all execution time is [MyView drawRect:], and 95% of my drawRect: time is the [bgImage drawInRect:self.bounds] call. I need to draw the background on the GPU, not the CPU. How is that possible?
What I have tried:
Using subviews instead of drawRect:. In my case it is very slow because of unavoidable color blending.
Adding a UIImageView as a background doesn't help; we can't draw on subviews ...
Adding an image layer to a CALayer? Is it possible?
UPDATE:
Now I am trying to use a CALayer instead of drawInRect::
- (void)drawRect:(CGRect)rect
{
    // some drawing
    CALayer *layer = [CALayer layer];
    layer.frame = self.bounds;
    layer.contents = (id)image.CGImage;
    layer.contentsScale = [UIScreen mainScreen].scale;
    layer.contentsCenter = CGRectMake(capInsects.left / image.size.width,
                                      capInsects.top / image.size.height,
                                      (image.size.width - capInsects.left - capInsects.right) / image.size.width,
                                      (image.size.height - capInsects.top - capInsects.bottom) / image.size.height);
    // some drawing
}
Nothing except this new layer is visible right now; all my painting is underneath this layer, I think...
I would use UILabels; you can reduce the cost of blending by giving the labels a background color or by setting shouldRasterize = YES on the layer of the view holding those labels. (You'll probably also want to set rasterizationScale to the same as [[UIScreen mainScreen] scale].)
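A sketch of that setup (containerView and the label contents are illustrative):
UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 120, 20)];
label.text = @"9:00 Meeting";
label.backgroundColor = [UIColor whiteColor]; // opaque background avoids per-frame blending
[containerView addSubview:label];

containerView.layer.shouldRasterize = YES;    // cache the composited subtree as a bitmap
containerView.layer.rasterizationScale = [[UIScreen mainScreen] scale]; // keep it sharp on Retina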
Another suggestion: there are lots of open-source calendar components that have probably had to solve the same issues. Perhaps you could look at how they solved them.

Instruments alleges Memory leak with [UIColor CGColor]

I am in the process of tracking down some memory leaks with Instruments. It claims I am leaking in the middle of a drawRect: method. Here is the code:
- (void)drawRect:(CGRect)rect {
    if (highColor && lowColor) {
        // Set the colors for the gradient to the two colors specified for high and low.
        // The next line is allegedly leaking.
        [gradientLayer setColors:[NSArray arrayWithObjects:(id)[highColor CGColor], (id)[lowColor CGColor], nil]];
        gradientLayer.startPoint = CGPointMake(0.5, 0.2);
    }
    [super drawRect:rect];
}
I am on the iPad, so I have to manage the memory myself (that is, no garbage collection). Can anyone see what's wrong here? My understanding is that I don't have to release the array, nor should I have to release the CGColors. Also, is there any way in Instruments to find out what object type is leaking, i.e. whether it's the NSArray or the CGColors?
Any help would be much appreciated. Thank you.
PS: I got the code for the GradientView from somewhere some months ago; it works very well (other than exposing the aforementioned memory leak). You can find the code here.
EDIT:
I have done a bit more research and refactored my code as follows:
- (void)drawRect:(CGRect)rect {
    if (highColor && lowColor) {
        // The following two lines are leaking
        CGColorRef highCGColor = [highColor CGColor];
        CGColorRef lowCGColor = [lowColor CGColor];
        // Set the colors for the gradient to the two colors specified for high and low
        [gradientLayer setColors:[NSArray arrayWithObjects:(id)highCGColor, (id)lowCGColor, nil]];
        gradientLayer.startPoint = CGPointMake(0.5, 0.2);
        CGColorRelease(highCGColor);
        CGColorRelease(lowCGColor);
    }
    [super drawRect:rect];
}
However, I can't work out why the two CGColors are still reported as leaking. I am releasing them at the end of the method. Is it possible that the NSArray does not release them properly when it is deallocated? Still puzzled...
I'm not an expert in memory management on iPhone/iPad, but the way I would do it: try an NSAutoreleasePool inside your if; the pool then handles the memory management, so you have:
if (...) {
    NSAutoreleasePool *myPool = [[NSAutoreleasePool alloc] init];
    // ... your drawing code ...
    [myPool drain];
}
Try it out and see if there is still a leak, but remember not to delete the two lines where you release your colors.
How are highColor and lowColor declared within your UIView? Are they being appropriately released/autoreleased? How about the declaration of your gradientLayer object: is that being released appropriately? Here is some code that I used at one stage to create a gradient on a UIView. Instead of using layers, it draws directly into the current context. Perhaps it might help:
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGGradientRef myGradient;
    CGColorSpaceRef myColorSpace;
    size_t num_locations = 3;
    CGFloat locations[3] = {0.0, 0.5, 1.0};
    CGFloat components[12] = {0.855, 0.749, 0.196, 1.0,
                              0.973, 0.933, 0.267, 1.0,
                              0.855, 0.749, 0.196, 1.0};
    myColorSpace = CGColorSpaceCreateDeviceRGB();
    myGradient = CGGradientCreateWithColorComponents(myColorSpace, components, locations, num_locations);
    // Horizontal gradient from the left edge to the right edge
    // (both points zero-initialized so they aren't used undefined).
    CGPoint myStartPoint = CGPointZero;
    CGPoint myEndPoint = CGPointZero;
    myEndPoint.x = self.bounds.size.width;
    CGContextDrawLinearGradient(context, myGradient, myStartPoint, myEndPoint, 0);
    CGColorSpaceRelease(myColorSpace);
    CGGradientRelease(myGradient);
}
You should not be releasing highCGColor and lowCGColor. Under the Core Foundation ownership rules, you only own (and must release) references obtained from functions with Create or Copy in their names; -[UIColor CGColor] is a getter, so calling CGColorRelease on its result over-releases the color.
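A minimal sketch of the ownership rule (illustrative, not from the question's code):
// Owned: obtained from a Create function, so it must be released.
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGFloat comps[4] = {1.0, 0.0, 0.0, 1.0};
CGColorRef owned = CGColorCreate(space, comps);
CGColorRelease(owned);        // balances CGColorCreate
CGColorSpaceRelease(space);   // balances CGColorSpaceCreateDeviceRGB

// Not owned: -CGColor is a getter; the UIColor owns this reference.
CGColorRef borrowed = [[UIColor redColor] CGColor];
// Do not call CGColorRelease(borrowed); that would over-release it.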
