On iOS, is calling CGContextRelease required?

When Instruments is run with the Activity Monitor instrument, the app running on an iPhone 4S uses 4.88 MB if the contexts are released, and also 4.88 MB if the contexts are not released. So does that mean releasing the context is optional? (I thought it was actually required.) The contexts are referenced by CGContextRef variables. (ARC is being used.)
The contexts are bitmap contexts (CGBitmapContext), created for the Retina display at about 640 x 640, and there are 4 such contexts, all created in viewDidAppear. If 1 pixel is 4 bytes, each context should already be about 1.6 MB. Could it be that after viewDidAppear is done, the contexts were automatically released? Basically, I generated CGImage objects from those bitmap contexts and set the CGImage objects as the contents of CALayer objects (using layer.contents = (__bridge id) cgImage;), so the bitmap contexts were no longer required. It is compiled using Xcode 4.3, with ARC, and targeted at iOS 4.3 (but I thought CGContextRef is not part of ARC).
Update/correction: it should be "generate CGImage objects from those CGBitmapContexts, and set those CGImage objects as the CALayer contents" (the original question has been edited to reflect that).

Releasing a CGContextRef is not optional, but you should be aware of whether or not you need to release it. It follows the standard manual memory management rules: if you take ownership (Create, Copy, Retain, and a few others), you must relinquish that ownership (Release). If you release when you don't have ownership, it's an over-release.
What you're probably seeing is that even after you've released the object, someone else is still retaining it, probably something in your view hierarchy. An object not deallocating after you release your ownership is typically not a problem.
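As an illustration, here is a minimal sketch of that Create/Release pairing in an ARC project (Core Foundation types such as CGContextRef are not managed by ARC); the 640 x 640 size and the layer variable are just placeholders taken from the question:
// Core Foundation objects are not covered by ARC, so the Create/Release
// pairing is still manual even when the rest of the file uses ARC.
size_t width = 640, height = 640;                 // placeholder dimensions
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                         colorSpace,
                                         kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);                  // we created it, we release it
// ... draw into ctx, then snapshot it ...
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
layer.contents = (__bridge id)cgImage;            // the CALayer retains the image
CGImageRelease(cgImage);                          // balances CGBitmapContextCreateImage
CGContextRelease(ctx);                            // balances CGBitmapContextCreate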

Related

CALayer seems to retain its contents - how to release resources? [duplicate]

Possible Duplicate:
Can’t release unused CALayer memory when using multiple layers
I have an app which allows the user to browse a series of images that are CALayers. I found that after I added more than 20 to the screen, the iPad 2 would crash; obviously I need to dynamically load them in when they are visible on the screen.
So I implemented this by removing the CALayer from its superlayer when no longer needed. What I found, however, is that the memory does not disappear when viewed in the Activity Monitor. It does get freed when I 'simulate a memory warning' in the simulator. Fine, you might think, that's what memory warnings are for. However, I still find the iPad runs out of RAM as I browse the images; the memory usage goes up and up until it crashes. Does anyone know a way to force a CALayer to release its resources?
Here's my code; note that if I omit assigning the layer contents, the memory usage remains low (but of course you don't see the image):
UIImage* image = [UIImage imageNamed:@"imageName"];
frontLayer = [CALayer layer];
frontLayer.bounds = CGRectMake(0, 0, 952, 650);
frontLayer.contents = (id) image.CGImage;
Note I am not using ARC, and after I remove the layer from its superlayer I release it (it is retained by a property). The fact that the memory seems to get reclaimed with a low memory warning makes me think it's not a problem with the way I am retaining/releasing but I am open to ideas.
I discovered this is actually a duplicate of this post
[UIImage imageNamed:@""] uses caching; when I use [UIImage imageWithContentsOfFile:@""] the problem goes away.
Can't release unused CALayer memory when using multiple layers
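For illustration, a minimal sketch of the non-caching variant described above; the file name and type are placeholders:
// imageNamed: keeps decoded images in a system cache, so the memory may not
// be returned until a memory warning. imageWithContentsOfFile: does not cache,
// so releasing the layer contents actually frees the pixels.
NSString *path = [[NSBundle mainBundle] pathForResource:@"imageName" ofType:@"png"];
UIImage *image = [UIImage imageWithContentsOfFile:path];
frontLayer = [CALayer layer];
frontLayer.bounds = CGRectMake(0, 0, 952, 650);
frontLayer.contents = (id) image.CGImage;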

Is it possible to persist the contents of a CALayer between drawInContext calls?

Are there any built-in abilities to maintain the contents of a CALayer between drawLayer:inContext: calls? Right now I am copying the layer to a buffer and redrawing an image from the buffer every time I'm called back in drawLayer:inContext:, but I'm wondering if CALayer has a way to do this automatically...
I don't believe so. drawInContext: will clear the underlying buffer so that you can draw into it. However, if you forgo the drawInContext: or drawRect: methods, you can set your layer.contents to a CGImage and that will be retained.
I personally do this for almost all of my routines. I override - (void) setFrame:(CGRect)frame to check whether the frame size has changed. If it has changed, I redraw the image using my normal drawing routines, but into the context: UIGraphicsBeginImageContextWithOptions(size, _opaque, 0);. I can then grab that image and set it to the image cache: cachedImage = UIGraphicsGetImageFromCurrentImageContext();. Then I set the layer.contents to the CGImage. I use this to help cache my drawings, especially on the new iPad, which is slow on many drawing routines the iPad 2 doesn't even blink on.
Other advantages to this method: You can share cached images between views if you set up a separate, shared cache. This can really help your memory footprint if you manage your cache well. (Tip: I use NSStringFromCGSize as a dictionary key for shared images.) Also, you can actually spin your drawing routines off onto a different thread, and then set your layer contents when they're done. This prevents your drawing routines from blocking the main thread (though the current image may be stretched in this case until the new image is set).
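A rough sketch of that caching approach under ARC, assuming a UIView subclass with a strong cachedImage property; drawInCurrentContext is a hypothetical stand-in for the normal drawing routines:
// Redraw only when the frame size actually changes, then cache the result
// and hand it to the layer instead of relying on drawRect:.
- (void)setFrame:(CGRect)frame {
    BOOL sizeChanged = !CGSizeEqualToSize(frame.size, self.frame.size);
    [super setFrame:frame];
    if (!sizeChanged && self.cachedImage != nil) {
        return;                                   // size unchanged, keep the cache
    }
    // Render with the normal drawing routines into an offscreen image context.
    UIGraphicsBeginImageContextWithOptions(frame.size, self.opaque, 0);
    [self drawInCurrentContext];                  // hypothetical drawing routine
    self.cachedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Hand the cached bitmap to the layer.
    self.layer.contents = (__bridge id)self.cachedImage.CGImage;
}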

Is it certain that UIGraphicsBeginImageContext use CGBitmapContextCreate to create the graphics context?

Does UIGraphicsBeginImageContext use CGBitmapContextCreate to create a graphics context [update: I can't find it in the documentation], so the graphics context is exactly the same either way? I also tried to step into UIGraphicsBeginImageContext but the debugger won't let me step into it.
In the UIKit Function References page of iOS documentation, the following is written about UIGraphicsBeginImageContext:
Creates a bitmap-based graphics context and makes it the current context.
Emphasis added. Following a link to the CGContextRef page, I find this:
A graphics context contains drawing parameters and all device-specific information needed to render the paint on a page to the destination, whether the destination is a window in an application, a bitmap image, a PDF document, or a printer.
Again, emphasis added. This says that (as of now) there are 4 kinds of Core Graphics context destinations, each with its own creation functions. The only kind that has to do with bitmaps is a bitmap-based CGContextRef, and there is only one documented way to create one (well, technically it comes in two versions). It is very likely that this function is being used. I believe that UIGraphicsBeginImageContext is merely a convenience function. It just sets up a default set of parameters for CGBitmapContextCreate (which takes a lot of parameters) and pushes the created context onto the graphics stack.
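To make the comparison concrete, here is a speculative sketch of the kind of setup such a convenience wrapper would have to do; this is not Apple's actual implementation, only the documented public calls a caller could use to build a comparable context by hand (the size and pixel format chosen here are assumptions):
// Hand-rolled rough equivalent of UIGraphicsBeginImageContext(size).
CGSize size = CGSizeMake(320.0, 480.0);           // placeholder size
size_t width  = (size_t)size.width;
size_t height = (size_t)size.height;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// The exact pixel format UIKit picks is an implementation detail; this is
// just one commonly used 32-bit layout.
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height,
                                         8,            // bits per component
                                         width * 4,    // bytes per row
                                         colorSpace,
                                         kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
CGColorSpaceRelease(colorSpace);
// A real wrapper would also flip the coordinate system so the origin matches
// UIKit's top-left convention before pushing the context.
UIGraphicsPushContext(ctx);                       // make it the current context
// ... draw with UIKit or Core Graphics ...
UIGraphicsPopContext();
CGContextRelease(ctx);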

iOS and multiple OpenGL views

I'm currently developing an iPad app which uses OpenGL to draw some very simple (no more than 1000 or 2000 vertices) rotating models in multiple OpenGL views.
There are currently 6 views in a grid, each one running its own display link to update the drawing. Due to the simplicity of the models, it's by far the simplest method to do it; I don't have the time to code a full OpenGL interface.
Currently, it's doing well performance-wise, but there are some annoying glitches. The first 3 OpenGL views display without problems, and the last 3 only display a few triangles (while still retaining the ability to rotate the model). Also there are some cases where the glDrawArrays call goes straight into EXC_BAD_ACCESS (especially on the simulator), which tells me there is something wrong with the buffers.
What I checked (as well as double- and triple-checked) is:
Buffer allocation seems OK
All resources are freed on dealloc
Instruments shows some warnings, but nothing that seems related
I'm thinking it's probably related to my having multiple views drawing at the same time, so is there any known thing I should have done there? Each view has its own context, but perhaps I'm doing something wrong with that...
Also, I just noticed that in the simulator, the afflicted views are flickering between the right drawing with all the vertices and the wrong drawing with only a few.
Anyway, if you have any ideas, thanks for sharing!
Okay, I'm going to answer my own question since I finally found what was going on. It was a small missing line that was causing all those problems.
Basically, to have multiple OpenGL views displayed at the same time, you need :
Either, the same context for every view. Here, you have to take care not to draw with multiple threads at the same time (i.e. lock the context somehow, as explained in this answer), and you have to re-bind the frame- and render-buffers on each frame.
Or, you can use different contexts for each view. Then, you have to re-set the context on each frame, because other display links could (and would, as in my case) cause your OpenGL calls to use the wrong data. Also, there is no need to re-bind frame- and render-buffers since your context is preserved.
Also, call glFlush() after each frame, to tell the GPU to finish rendering each frame fully.
In my case (the second one), the code for rendering each frame (in iOS) looks like:
- (void) drawFrame:(CADisplayLink*)displayLink {
// Set current context, assuming _context
// is the class ivar for the OpenGL Context
[EAGLContext setCurrentContext:_context];
// Clear whatever you want
glClear (GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
// Do matrix stuff
...
glUniformMatrix4fv (...);
// Set your viewport
glViewport (0, 0, self.frame.size.width, self.frame.size.height);
// Bind object buffers
glBindBuffer (GL_ARRAY_BUFFER, _vertexBuffer);
glVertexAttribPointer (_glVertexPositionSlot, 3, ...);
// Draw elements
glDrawArrays (GL_TRIANGLES, 0, _currentVertexCount);
// Discard unneeded depth buffer
const GLenum discard[] = {GL_DEPTH_ATTACHMENT};
glDiscardFramebufferEXT (GL_FRAMEBUFFER, 1, discard);
// Present render buffer
[_context presentRenderbuffer:GL_RENDERBUFFER];
// Unbind and flush
glBindBuffer (GL_ARRAY_BUFFER, 0);
glFlush();
}
EDIT
I'm going to edit this answer, since I found out that running multiple CADisplayLinks can cause some issues. You have to make sure to set the frameInterval property of your CADisplayLink instance to something other than 0 or 1. Otherwise, the run loop will only have time to call the first render method, and then it'll call it again, and again. In my case, that was why only one object was moving. Now it's set to 3 or 4 frames, and the run loop has time to call all the render methods.
This applies only to the application running on the device. The simulator, being very fast, doesn't care about such things.
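For reference, a minimal sketch of the display-link setup described above; the target/selector pairing matches the drawFrame: method shown earlier, and the interval value is the one that worked in this case:
// One display link per view; a frameInterval above 1 leaves the run loop
// enough time to service the other views' render methods as well.
CADisplayLink *displayLink =
    [CADisplayLink displayLinkWithTarget:self selector:@selector(drawFrame:)];
displayLink.frameInterval = 3;                    // fire on every 3rd vsync
[displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];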
It gets tricky when you want multiple UIViews that are OpenGL views; on this site you should be able to read all about it: Using multiple openGL Views and uikit

CGContextDrawImage is EXTREMELY slow after large UIImage drawn into it

It seems that CGContextDrawImage(CGContextRef, CGRect, CGImageRef) performs MUCH WORSE when drawing a CGImage that was created by CoreGraphics (i.e. with CGBitmapContextCreateImage) than it does when drawing the CGImage which backs a UIImage. See this testing method:
-(void)showStrangePerformanceOfCGContextDrawImage
{
///Setup : Load an image and start a context:
UIImage *theImage = [UIImage imageNamed:@"reallyBigImage.png"];
UIGraphicsBeginImageContext(theImage.size);
CGContextRef ctxt = UIGraphicsGetCurrentContext();
CGRect imgRec = CGRectMake(0, 0, theImage.size.width, theImage.size.height);
///Why is this SO MUCH faster...
NSDate * startingTimeForUIImageDrawing = [NSDate date];
CGContextDrawImage(ctxt, imgRec, theImage.CGImage); //Draw existing image into context Using the UIImage backing
NSLog(@"Time was %f", [[NSDate date] timeIntervalSinceDate:startingTimeForUIImageDrawing]);
/// Create a new image from the context to use this time in CGContextDrawImage:
CGImageRef theImageConverted = CGBitmapContextCreateImage(ctxt);
///This is WAY slower but why?? Using a pure CGImageRef (as opposed to one behind a UIImage) seems like it should be faster but AT LEAST it should be the same speed!?
NSDate * startingTimeForNakedGImageDrawing = [NSDate date];
CGContextDrawImage(ctxt, imgRec, theImageConverted);
NSLog(@"Time was %f", [[NSDate date] timeIntervalSinceDate:startingTimeForNakedGImageDrawing]);
}
So I guess the question is, #1 what may be causing this and #2 is there a way around it, i.e. other ways to create a CGImageRef which may be faster? I realize I could convert everything to UIImages first but that is such an ugly solution. I already have the CGContextRef sitting there.
UPDATE: This does not necessarily seem to be true when drawing small images. That may be a clue: the problem is amplified when large images (i.e. full-size camera pics) are used. 640x480 seems to be pretty similar in terms of execution time with either method.
UPDATE 2: OK, so I've discovered something new. It's actually NOT the backing of the CGImage that is changing the performance. I can flip-flop the order of the 2 steps and make the UIImage method behave slowly, whereas the "naked" CGImage will be super fast. It seems whichever you perform second will suffer from terrible performance. This seems to be the case UNLESS I free memory by calling CGImageRelease on the image I created with CGBitmapContextCreateImage. Then the UIImage-backed method will be fast subsequently. The inverse is not true. What gives? "Crowded" memory shouldn't affect performance like this, should it?
UPDATE 3: Spoke too soon. The previous update holds true for images at size 2048x2048, but stepping up to 1936x2592 (camera size) the naked CGImage method is still way slower, regardless of order of operations or memory situation. Maybe there are some CG internal limits that make a 16MB image efficient whereas the 21MB image can't be handled efficiently. It's literally 20 times slower to draw the camera size than a 2048x2048. Somehow UIImage provides its CGImage data much faster than a pure CGImage object does. o.O
UPDATE 4: I thought this might have to do with some memory caching thing, but the results are the same whether the UIImage is loaded with the non-caching [UIImage imageWithContentsOfFile:] or with [UIImage imageNamed:].
UPDATE 5 (Day 2): After creating more questions than were answered yesterday, I have something solid today. What I can say for sure is the following:
The CGImages behind a UIImage don't use alpha (kCGImageAlphaNoneSkipLast). I thought that maybe they were faster to draw because my context WAS using alpha. So I changed the context to use kCGImageAlphaNoneSkipLast. This makes the drawing MUCH faster, UNLESS:
Drawing into a CGContextRef with a UIImage FIRST makes ALL subsequent image drawing slow
I proved this by 1) first creating a non-alpha context (1936x2592), 2) filling it with randomly colored 2x2 squares. 3) Full-frame drawing a CGImage into that context was FAST (.17 seconds). 4) I repeated the experiment but filled the context with a drawn CGImage backing a UIImage. Subsequent full-frame image drawing was 6+ seconds. SLOWWWWW.
Somehow drawing into a context with a (large) UIImage drastically slows all subsequent drawing into that context.
Well after a TON of experimentation I think I have found the fastest way to handle situations like this. The drawing operation above which was taking 6+ seconds now takes .1 seconds. YES. Here's what I discovered:
Homogenize your contexts & images with a pixel format! The root of the question I asked boiled down to the fact that the CGImages inside a UIImage were using THE SAME PIXEL FORMAT as my context, and were therefore fast. The other CGImages were a different format and therefore slow. Inspect your images with CGImageGetAlphaInfo to see which pixel format they use. I'm using kCGImageAlphaNoneSkipLast EVERYWHERE now as I don't need to work with alpha. If you don't use the same pixel format everywhere, Quartz will be forced to perform an expensive conversion for EACH pixel when drawing an image into a context. = SLOW
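A minimal sketch of checking the formats and creating a matching context; the image variable and the dimensions are placeholders:
// Inspect what the source image actually uses...
CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(image.CGImage);
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(image.CGImage);
NSLog(@"alpha info: %u, bitmap info: %u", (unsigned)alphaInfo, (unsigned)bitmapInfo);
// ...and give the destination context the same layout so Quartz can blit
// without converting every pixel on the way in.
size_t width = 1936, height = 2592;               // placeholder camera-sized dimensions
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                         colorSpace, kCGImageAlphaNoneSkipLast);
CGColorSpaceRelease(colorSpace);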
USE CGLayers! These make offscreen drawing performance much better. How this works is basically as follows. 1) Create a CGLayer from the context using CGLayerCreateWithContext. 2) Do any drawing/setting of drawing properties on THIS LAYER's CONTEXT, which is obtained with CGLayerGetContext. READ any pixels or information from the ORIGINAL context. 3) When done, "stamp" this CGLayer back onto the original context using CGContextDrawLayerAtPoint. This is FAST as long as you keep in mind:
1) Release any CGImages created from a context (i.e. those created with CGBitmapContextCreateImage) BEFORE "stamping" your layer back into the CGContextRef using CGContextDrawLayerAtPoint. This creates a 3-4x speed increase when drawing that layer. 2) Keep your pixel format the same everywhere!! 3) Clean up CG objects AS SOON as you can. Things hanging around in memory seem to create strange situations of slowdown, probably because there are callbacks or checks associated with these strong references. Just a guess, but I can say that CLEANING UP MEMORY ASAP helps performance immensely.
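A minimal sketch of that CGLayer flow; destinationCtx stands for whatever CGContextRef you are ultimately drawing into, and the size is a placeholder:
// 1) Create a layer that inherits the destination context's characteristics.
CGLayerRef cachedLayer = CGLayerCreateWithContext(destinationCtx,
                                                  CGSizeMake(512.0, 512.0), NULL);
CGContextRef layerCtx = CGLayerGetContext(cachedLayer);
// 2) Do the offscreen drawing into the layer's own context.
CGContextSetFillColorWithColor(layerCtx, [UIColor redColor].CGColor);
CGContextFillRect(layerCtx, CGRectMake(0, 0, 512, 512));
// 3) Release any CGImages created from the destination context first,
//    then "stamp" the layer back onto it.
CGContextDrawLayerAtPoint(destinationCtx, CGPointMake(0, 0), cachedLayer);
CGLayerRelease(cachedLayer);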
I had a similar problem. My application has to redraw a picture almost as large as the screen size. The problem came down to drawing as fast as possible two images of the same resolution, neither rotated nor flipped, but scaled and positioned in different places of the screen each time. After all, I was able to get ~15-20 FPS on iPad 1 and ~20-25 FPS on iPad4. So... hope this helps someone:
Exactly as typewriter said, you have to use the same pixel format. Using one with AlphaNone gives a speed boost. But even more important, the argb32_image call in my case made numerous calls converting pixels from ARGB to BGRA. So the best bitmapInfo value for me was (at the time; there is a probability that Apple can change something here in the future):
const CGBitmapInfo g_bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast;
CGContextDrawImage may work faster if the rectangle argument is made integral (via CGRectIntegral). This seems to have more effect when the image is scaled by a factor close to 1.
Using layers actually slowed things down for me. Probably something has changed since 2011 in some internal calls.
Setting the interpolation quality for the context lower than the default (via CGContextSetInterpolationQuality) is important. I would recommend using (IS_RETINA_DISPLAY ? kCGInterpolationNone : kCGInterpolationLow). The IS_RETINA_DISPLAY macro is taken from here.
Make sure you get the CGColorSpaceRef from CGColorSpaceCreateDeviceRGB() or the like when creating the context. Some performance issues were reported for getting a fixed color space instead of requesting that of the device (a combined setup sketch incorporating these points follows this list).
Inheriting the view class from UIImageView and simply setting self.image to the image created from the context proved useful to me. However, read about using UIImageView first if you want to do this, as it requires some changes in code logic (because drawRect: isn't called anymore).
And if you can avoid scaling your image at the time of actual drawing, try to do so. Drawing a non-scaled image is significantly faster; unfortunately, for me that was not an option.
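To tie several of these points together, here is a minimal context-setup sketch; the dimensions, the drawn rectangle, and someImage (an existing CGImageRef) are placeholders:
size_t width = 1024, height = 768;                // placeholder dimensions
BOOL isRetina = ([UIScreen mainScreen].scale >= 2.0);
// Same pixel format everywhere, device RGB color space.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast;
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                         colorSpace, bitmapInfo);
CGColorSpaceRelease(colorSpace);
// Cheaper interpolation, as recommended above.
CGContextSetInterpolationQuality(ctx, isRetina ? kCGInterpolationNone
                                               : kCGInterpolationLow);
// Snap the destination rect to integral pixel boundaries before drawing.
CGRect drawRect = CGRectIntegral(CGRectMake(10.3, 10.7, 400.2, 300.9));
CGContextDrawImage(ctx, drawRect, someImage);     // someImage: existing CGImageRef
CGContextRelease(ctx);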
