Need to release a CGImage contained in a CALayer's contents property - ios

I have a CALayer object called sublayer. I use it as self.sublayer throughout my view controller because I have made it a property in my view controller's header file.
I set the sublayer's contents property equal to a UIImage object that is created using a CGImageRef object called imageRef:
self.sublayer.contents = (id)[UIImage imageWithCGImage:imageRef].CGImage;
I then release the imageRef object right away now that it has been used to create the sublayer contents and it is no longer needed:
CGImageRelease(imageRef);
However, here is what is bothering me. Later on in the code I will no longer need self.sublayer.contents and I want to make sure I release the CGImage it contains properly.
If I NSLog self.sublayer.contents it will print this to the console: <CGImage 0x146537c0>
So I need to be able to release this CGImage as well.
I tried using this to release the CGImage, but the NSLog still prints the same to the console:
CGImageRelease((__bridge CGImageRef)(self.sublayer.contents));
If I use this, the NSLog will print to the console as (null), but I am worried that this is technically not releasing the CGImage:
self.sublayer.contents = nil;
Does setting the sublayer's contents property to nil properly release the CGImage, or am I correct in thinking that it is not technically releasing the CGImage?
I am experiencing memory problems right now in my app so I need to make sure that I am releasing this CGImage properly.

The contents property on CALayer is a retaining property, meaning that its setter implementation more or less does this:
- (void)setContents:(id)contents
{
    if (contents == _contents) return; // Same as the existing value
    [_contents release];
    _contents = [contents retain];
}
So, when you set nil as the new contents, the old contents is released.
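A minimal sketch of the full ownership cycle, under the assumption that imageRef was obtained from a CGImageCreate* function (so you own one reference): you balance your own create after handing the image to the layer, and the layer drops its retained reference when you nil out contents.

```objc
// Hand the image to the layer; the contents setter retains it.
self.sublayer.contents = (__bridge id)imageRef;
CGImageRelease(imageRef); // balances your Create; the layer still holds one

// ... later, when the image is no longer needed:
self.sublayer.contents = nil; // the layer releases its retained reference
```

After the final line, no reference to the CGImage remains and it is deallocated; NSLogging contents printing (null) is exactly what you want to see.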

Related

Testing that a particular image was copied to pasteboard

I'm writing a test to verify a feature which copies an image to the pasteboard.
Here's the test, as I would prefer to write it:
// reset the paste board
UIPasteboard.generalPasteboard.image = nil; //<-- this explodes
XCTAssertNil(UIPasteboard.generalPasteboard.image);
// Grab some random existing image
UIImage *image = [UIImage imageNamed:@"some-image"];
MJKMyClass *myInstance = [[MJKMyClass alloc] initWithImage:image];
[myInstance doSomethingThatCopiesImageToPasteboard];
XCTAssertNotNil(UIPasteboard.generalPasteboard.image);
This fails with:
failed: caught "NSInvalidArgumentException", "-[UIPasteboard setImage:]: Argument is not an object of type UIImage [(null)]"
Which is surprising, because, according to the UIPasteboard header, image is a nullable field.
@interface UIPasteboard (UIPasteboardDataExtensions)
<...>
@property (nullable, nonatomic, copy) UIImage *image __TVOS_PROHIBITED;
<...>
@end
I presume this means they are doing a runtime check on the argument, even though it is nullable.
Things I have tried:
comparing objects by pointer identity doesn't work, because UIImages are copied by generalPasteboard.image (every time you call UIPasteboard.generalPasteboard.image you get a different instance)
comparing by PNG representation might work, but seems gnarly.
comparing by image size has been my closest bet so far, but also seems roundabout.
You can clear the pasteboard without having to pass nil by using the items property:
UIPasteboard.generalPasteboard.items = @[];
Or in Swift:
UIPasteboard.generalPasteboard().items = []
For comparing UIImages, you might want to look at some other questions:
How to compare two UIImage objects
https://stackoverflow.com/search?q=uiimage+compare
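A sketch of the PNG-representation comparison the question mentions, assuming both images share the same scale and orientation (the helper name is illustrative, not from the original):

```objc
// Compare two UIImages by their encoded PNG bytes.
// A size check first avoids the encoding work in the common mismatch case.
static BOOL MJKImagesAreEqual(UIImage *a, UIImage *b)
{
    if (!CGSizeEqualToSize(a.size, b.size)) return NO;
    NSData *dataA = UIImagePNGRepresentation(a);
    NSData *dataB = UIImagePNGRepresentation(b);
    return [dataA isEqualToData:dataB];
}
```

It is "gnarly" in the sense that it re-encodes both images on every call, but for a test assertion that cost is usually acceptable.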

How to optimize memory usage in UIImage

I am trying to take a screenshot of a UIWebView and send it with an observer to another UIImageView in another class.
I'm using this method to take the screenshot:
- (UIImage *)takeScreenshoot {
    @autoreleasepool {
        UIGraphicsBeginImageContext(CGSizeMake(self.view.frame.size.width, self.view.frame.size.height));
        CGContextRef context = UIGraphicsGetCurrentContext();
        [self.webPage.layer renderInContext:context];
        UIImage *__weak screenShot = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return screenShot;
    }
}
But then I have a problem. Every time I take a screenshot with this method, memory usage grows by about 10-15 MB and is never released. And if I take a screenshot in every webViewDidFinishLoad, you can imagine how much memory that consumes!
How can I fix that issue?
If possible, try to use -[UIScreen snapshotViewAfterScreenUpdates:], which returns a UIView.
This is a snapshot of the currently displayed content (a snapshot of the app).
Apple even says "this method is faster than trying to render the contents of the screen into a bitmap image yourself."
According to your code, you are passing this bitmap image just to display it in some other UIImageView, so I think using the UIScreen method is appropriate here.
To display the UIWebView part only:
Create another UIView instance. Set its frame to the frame of your webView.
Now add this snapshot view as a subview of the created view, and set its frame so that only the webView portion is displayed.
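A sketch of that clipping approach, assuming self.webPage is the web view; containerView and screenshotView are illustrative names, not from the original:

```objc
// Snapshot the whole screen (fast; no bitmap rendering involved).
UIView *screenshotView = [[UIScreen mainScreen] snapshotViewAfterScreenUpdates:YES];

// Clip to the web view's frame by nesting the snapshot in a container.
UIView *containerView = [[UIView alloc] initWithFrame:self.webPage.frame];
containerView.clipsToBounds = YES;
// Shift the snapshot so the web view's region lines up with the container.
screenshotView.frame = CGRectMake(-self.webPage.frame.origin.x,
                                  -self.webPage.frame.origin.y,
                                  screenshotView.frame.size.width,
                                  screenshotView.frame.size.height);
[containerView addSubview:screenshotView];
[self.view addSubview:containerView];
```

Note that a snapshot view is not a UIImage, so this only fits the stated use case of displaying the capture, not persisting or processing it.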
Try calling CGContextRelease(context); after you have got your screenshot.
Or, as @Greg said, remove the line and use UIGraphicsGetCurrentContext() directly.

iOS7's drawViewHierarchyInRect doesn't work?

From what I've read, iOS7's new drawViewHierarchyInRect is supposed to be faster than CALayer's renderInContext. And according to this and this, it should be a simple matter of calling:
[myView drawViewHierarchyInRect:myView.frame afterScreenUpdates:YES];
instead of
[myView.layer renderInContext:UIGraphicsGetCurrentContext()];
However, when I try this, I just get blank images. Full code that does the capture, where "self" is a subclass of UIView,
// YES = opaque. Ignores the alpha channel, so less memory is used.
UIGraphicsBeginImageContextWithOptions(self.bounds.size, YES, self.window.screen.scale); // Still slow.
if ([AIMAppDelegate isOniOS7OrNewer])
    [self drawViewHierarchyInRect:self.frame afterScreenUpdates:YES]; // Doesn't work!
else
    [self.layer renderInContext:UIGraphicsGetCurrentContext()]; // Works!
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
contentImageView.image = image; // this is empty if done using iOS7's way
and contentImageView is a UIImageView that is added as a subView to self during initialization.
Additionally, the drawing that I want captured in the image is contained in other sub-views that are also added to self as a sub-view during initialization (including contentImageView).
Any ideas why this is failing when using drawViewHierarchyInRect?
* Update *
I get an image if I draw a specific sub-view, such as:
[contentImageView drawViewHierarchyInRect:contentImageView.frame afterScreenUpdates:YES];
or
[self.curvesView drawViewHierarchyInRect:self.curvesView.frame afterScreenUpdates:YES];
however I need all the visible sub-views combined into one image.
Try it with self.bounds rather than self.frame; it's possible you're getting an image of your view rendered outside the boundaries of the image context you've created.
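The fix applied to the capture code from the question (iOS 7 check dropped for brevity):

```objc
UIGraphicsBeginImageContextWithOptions(self.bounds.size, YES, self.window.screen.scale);
// Use bounds, not frame: frame is in the superview's coordinate space, so a
// non-zero origin draws the view offset (often entirely) outside the context.
[self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```

renderInContext: happened to work with frame only because the view's origin was at (0, 0); bounds is correct for both paths.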

Rules for managing CGImageRef memory?

What are the rules for managing memory for CGImageRefs with ARC? That is, can someone help me to the right documentation?
I am getting images from the photo library and creating a UIImage to display:
CGImageRef newImage = [assetRep fullResolutionImage];
...
UIImage *cloudImage = [UIImage imageWithCGImage:newImage scale:scale orientation:orientation];
Do I need to do CGImageRelease(newImage)?
I'm getting memory warnings but it doesn't seem to be a gradual buildup of objects I haven't released and I'm not seeing any leaks with Instruments. Puzzled I am.
No, you do not need to call CGImageRelease() on the CGImageRef returned by ALAssetRepresentation's convenience methods like fullResolutionImage or fullScreenImage. Unfortunately, at the current time, the documentation and header files for these methods do not make that clear.
If you create a CGImageRef yourself by using one of the CGImageCreate*() functions, then you own it and are responsible for releasing that image ref using CGImageRelease(). In contrast, the CGImageRefs returned by fullResolutionImage and fullScreenImage appear to be "autoreleased" in the sense that you do not own the image ref returned by those methods. For example, say you try something like this in your code:
CGImageRef newImage = [assetRep fullResolutionImage];
...
UIImage *cloudImage = [UIImage imageWithCGImage:newImage scale:scale orientation:orientation];
CGImageRelease(newImage);
If you run the static analyzer, it will issue the following warning for the CGImageRelease(newImage); line:
Incorrect decrement of the reference count of an object that is not
owned at this point by the caller
Note that you will get this warning regardless of whether your project is set to use Manual Reference Counting or ARC.
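By contrast, a sketch of the ownership rule for an image ref you do own; CGImageCreateWithImageInRect is one such Create function, and original here stands in for any UIImage you hold:

```objc
// "Create" in the function name means you own the result and must release it.
CGImageRef cropped = CGImageCreateWithImageInRect(original.CGImage,
                                                 CGRectMake(0, 0, 100, 100));
UIImage *croppedImage = [UIImage imageWithCGImage:cropped];
CGImageRelease(cropped); // required: balances the Create above
```

This is the standard Core Foundation "Create/Copy" naming convention: you release what you Create or Copy, and you leave alone what a Get-style or convenience accessor hands you.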
In contrast, the documentation for the CGImage method of NSBitmapImageRep, for example, makes it clearer that the returned CGImageRef is autoreleased:
CGImage
Returns a Core Graphics image object from the receiver’s
current bitmap data.
- (CGImageRef)CGImage
Return Value
Returns an autoreleased CGImageRef opaque type based on the receiver’s
current bitmap data.

invalid context error when drawing a UIImage

I am trying to draw a UIImage to the context of my UIView. I've used this code with the context stuff commented in and out ...
- (void)drawRect:(CGRect)rect
{
    //UIGraphicsBeginImageContextWithOptions(rect.size, YES, [[UIScreen mainScreen] scale]);
    //UIGraphicsBeginImageContext(rect.size);
    UIImage *newImage = [UIImage imageNamed:@"character_1_0001.png"];
    //[newImage drawAtPoint:CGPointMake(200, 200)];
    [newImage drawInRect:rect];
    //UIGraphicsEndImageContext();
}
As I understand it I shouldn't need to set the image context to do this and indeed without it I do see the image drawn in the view but I also get this error in the log ...
<Error>: CGContextSaveGState: invalid context 0x0
If I uncomment the UIGraphicsBeginImageContext lines I don't get anything drawn and no error.
Do I need to use the default graphics context and declare this somehow?
I need to draw the image because I want to add to it on the context and can't just generate a load of UIImageView objects.
It sounds like you are calling drawRect: directly, so it is getting called once from your call and once from the genuine drawing loop.
With the context creation present, you do all the drawing in a new context and then discard it, so you never see anything.
With the context creation missing, you get the error from the manually called drawRect:, and the actual drawing takes place in the genuine drawRect:, so you see your image.
Don't call drawRect: directly. Call setNeedsDisplay and drawRect: will be called for you at the appropriate point.
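A sketch of the intended flow: mark the view dirty and let UIKit invoke drawRect: with a valid current context already set up.

```objc
// Somewhere in the view (or its controller), when the content changes:
[self setNeedsDisplay]; // schedules a redraw; never invoke drawRect: yourself

// UIKit then calls drawRect: on the next drawing pass, with a valid
// current graphics context, so the UIImage drawing methods just work:
- (void)drawRect:(CGRect)rect
{
    UIImage *newImage = [UIImage imageNamed:@"character_1_0001.png"];
    [newImage drawInRect:rect];
}
```

setNeedsDisplay coalesces multiple requests into one redraw per display cycle, which is also why it is cheap to call repeatedly.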
I discovered what was happening. The ViewController host of my custom UIView was instantiating a CALayer object. This was some old code that was previously being used as the drawing layer for a bitmap from an AVCaptureSession that I am using to generate my image.
I'm now not using the CALayer and am instead just drawing a slice of each frame of video onto the default graphics context in my view.
For some reason the instantiation of this CALayer was causing the conflict in the context.
Now it is not being used I only need to use the following code to draw the image ...
- (void)drawRect:(CGRect)rect
{
    UIImage *newImage = [UIImage imageNamed:@"character_1_0001.png"];
    [newImage drawInRect:rect];
}
Now I just do the same thing with an image that I pass in as a CGImageRef and adjust the rect ... hopefully.
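One caveat for the CGImageRef variant: drawing with CGContextDrawImage directly uses Core Graphics' bottom-left origin, so the image comes out upside down unless you flip the context first. A sketch, assuming imageRef names the passed-in CGImageRef:

```objc
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    // Flip the coordinate system to match UIKit's top-left origin.
    CGContextTranslateCTM(context, 0, rect.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, rect, imageRef); // imageRef: the CGImageRef passed in
    CGContextRestoreGState(context);
}
```

Alternatively, wrapping the ref in a UIImage with imageWithCGImage: and calling drawInRect: handles the flip for you, as in the code above.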