Core Graphics image blending doesn't release memory - ios

I am blending two images with the following code:
+ (UIImage *)xxx_blendedImageWithFirstImage:(UIImage *)image
                                secondImage:(UIImage *)secondImage
                            renderedInFrame:(CGRect)frame
                                      alpha:(CGFloat)alpha {
    UIGraphicsBeginImageContextWithOptions(frame.size, NO, UIScreen.mainScreen.scale);
    [image drawInRect:frame];
    [secondImage drawInRect:frame blendMode:kCGBlendModeNormal alpha:alpha];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
This code is called from a UISlider's IBAction, so it runs on every slider position change. [Memory-footprint graph omitted.] Memory climbs to about 230 MB and the app then fails due to memory pressure. How can I make this code work properly?

If you have connected this IBAction to the Value Changed event in the xib, perform the blending on a background thread and make sure all UI changes happen on the main thread. Otherwise, connect the IBAction to the Touch Up Inside event instead; then the method is called once per drag, when the user releases the slider, rather than for every intermediate value.
Hope it will solve your problem.
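As a sketch of the first suggestion, the Value Changed handler can also coalesce rapid slider updates so the expensive blend runs only once the values stop arriving. This assumes a hypothetical rebuildBlendedImage helper that does the actual drawing:

- (IBAction)sliderValueChanged:(UISlider *)sender {
    // Cancel any blend scheduled by an earlier slider event...
    [NSObject cancelPreviousPerformRequestsWithTarget:self
                                             selector:@selector(rebuildBlendedImage)
                                               object:nil];
    // ...and reschedule it, so it fires only once the slider pauses.
    [self performSelector:@selector(rebuildBlendedImage) withObject:nil afterDelay:0.05];
}

Alternatively, setting the slider's continuous property to NO keeps the Value Changed wiring but fires the event only when the user lifts their finger, which matches the Touch Up Inside suggestion.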

After adding @autoreleasepool:
+ (UIImage *)xxx_blendedImageWithFirstImage:(UIImage *)image
                                secondImage:(UIImage *)secondImage
                            renderedInFrame:(CGRect)frame
                                      alpha:(CGFloat)alpha {
    UIImage *newImage = nil;
    @autoreleasepool {
        UIGraphicsBeginImageContextWithOptions(frame.size, NO, UIScreen.mainScreen.scale);
        [image drawInRect:frame];
        [secondImage drawInRect:frame blendMode:kCGBlendModeNormal alpha:alpha];
        newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    return newImage;
}
Maybe there is still room for some manual release somewhere, but the memory footprint now looks like this: [Memory-footprint graph omitted.]
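If your deployment target allows iOS 10+, another option worth considering is UIGraphicsImageRenderer, which manages its context and backing stores for you. A minimal sketch of the same blend (not the asker's original method, just an equivalent):

+ (UIImage *)xxx_blendedImageWithFirstImage:(UIImage *)image
                                secondImage:(UIImage *)secondImage
                            renderedInFrame:(CGRect)frame
                                      alpha:(CGFloat)alpha {
    // The renderer's default format already matches the main screen's scale.
    UIGraphicsImageRenderer *renderer =
        [[UIGraphicsImageRenderer alloc] initWithSize:frame.size];
    return [renderer imageWithActions:^(UIGraphicsImageRendererContext *context) {
        [image drawInRect:frame];
        [secondImage drawInRect:frame blendMode:kCGBlendModeNormal alpha:alpha];
    }];
}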

Related

iOS: can I take screenshot on global queue?

I wonder how I can take a screenshot on a global queue? Right now I'm doing it on the main queue and it works; if I do it on a global queue, things freeze. I'm using this screenshot code: iOS: what's the fastest, most performant way to make a screenshot programmatically?
I also tried the following code to take a snapshot of self.view, but it doesn't work either:
+ (UIImage *)snapshot_of_view:(UIView *)view {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, [UIScreen mainScreen].scale);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Any ideas?
Any UI operation MUST be performed on the main thread.
You are calling user interface code, and UI code has to run on the main thread, because that thread owns the main run loop.
You need to do it on the main thread. Note that the snapshot is then produced asynchronously, so the method can no longer return the image directly; it has to hand it back through a completion block instead:
+ (void)snapshot_of_view:(UIView *)view completion:(void (^)(UIImage *image))completion {
    dispatch_async(dispatch_get_main_queue(), ^{
        UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, [UIScreen mainScreen].scale);
        [view.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        // A block cannot return to the original caller, so deliver the result here.
        if (completion) completion(image);
    });
}

Fastest way to take screenShot of UIView

I've searched a lot but only found two methods to take a screenshot of a UIView.
First, renderInContext:, which I've used this way:
// createBitmapContextOfSize: is my own helper (not shown).
CGContextRef context = [self createBitmapContextOfSize:CGSizeMake(nImageWidth, nImageHeight)];
// Flip vertically: Core Graphics' origin is bottom-left, UIKit's is top-left.
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, nImageHeight);
CGContextConcatCTM(context, flipVertical);
[self.layer setBackgroundColor:[UIColor clearColor].CGColor];
[self.layer renderInContext:context];
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *background = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(context);  // assuming the create... helper returns a +1 context
Second, drawViewHierarchyInRect:, which I've used as:
UIImage *background = nil;
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, self.window.screen.scale);
if ([self respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)])
{
    [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
}
background = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I know that the second one is faster than the first, and it works for me on iPhone because the view is small. But when capturing on iPad, the video becomes jerky.
Can anybody tell me a faster way of taking a screenshot?
Any help would be highly appreciated.
Regarding performance, the Apple docs state the following:

In addition to -drawViewHierarchyInRect:afterScreenUpdates:, UIView now provides another two snapshot-related methods, -snapshotViewAfterScreenUpdates: and -resizableSnapshotViewFromRect:afterScreenUpdates:withCapInsets:. UIScreen also has -snapshotViewAfterScreenUpdates:. Unlike UIView's -drawViewHierarchyInRect:afterScreenUpdates:, these methods return a UIView object. If you are looking for a new snapshot view, use one of these methods. It will be more efficient than calling -drawViewHierarchyInRect:afterScreenUpdates: to render the view contents into a bitmap image yourself. You can use the returned view as a visual stand-in for the current view/screen in your app. For example, you might use a snapshot view for animations where updating a large view hierarchy might be expensive.
There is a third method for taking a snapshot that is much quicker than either of these, but it returns a UIView rather than a UIImage:
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
If you are just using the snapshot to place as a background "image" etc., then I'd use this instead. However, this is only available on iOS 7 and later.
To use it, just do:
UIView *snapshotView = [someView snapshotViewAfterScreenUpdates:YES];
This method will return a snapshot image of a particular view:
- (UIImage *)createSnapShotImageFromView:(UIView *)view
{
    UIGraphicsBeginImageContext(view.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [view.layer renderInContext:context];
    UIImage *img_screenShot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img_screenShot;
}

Save scene to camera roll

I'm working on an augmented reality app for iPhone, and I'm using the "ImageTargets" sample code from the Vuforia SDK. I'm using my own images as templates and my own model to augment the scene (just a few vertices in OpenGL). The next thing I want to do is save the scene to the camera roll when a button is pushed. I created the button as well as the method the button responds to. Here comes the tricky part: when I press the button, the method gets called and the image is saved properly, but the image is completely white, showing only the button icon (like this: http://tinypic.com/r/16c2kjq/5).
- (void)saveImage {
    UIGraphicsBeginImageContext(self.view.layer.frame.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(viewImage, self,
                                   @selector(image:didFinishSavingWithError:contextInfo:), nil);
}

- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error
                                              contextInfo:(void *)contextInfo {
    NSLog(@"Image Saved");
}
I have these two methods in the ImageTargetsParentViewController class, but I also tried saving the view from ARParentViewController (and even moved the methods to that class). Has anyone found a solution to this? I'm not sure which view to save and/or whether there are any tricky parts to saving a view that contains OpenGL ES content. Thanks for any reply.
Try using this code to save the photo:
- (void)saveImage {
    UIGraphicsBeginImageContext(self.view.bounds.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *imagee = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGRect rect = CGRectMake(0, 0, 320, 480);
    CGImageRef imageRef = CGImageCreateWithImageInRect([imagee CGImage], rect);
    UIImage *img = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    UIImageWriteToSavedPhotosAlbum(img, nil, nil, nil);
}
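Note, though, that -renderInContext: cannot capture OpenGL ES content, which is the usual cause of the white image in the question. If the code above still produces a blank scene, the pixels have to be read back from the GL framebuffer directly. A minimal sketch, assuming the EAGL context is current and width/height are the backing-store dimensions of the CAEAGLLayer (the method and callback names here are hypothetical):

#import <OpenGLES/ES2/gl.h>

static void xxx_releasePixels(void *info, const void *data, size_t size) {
    free((void *)data);
}

- (UIImage *)snapshotOfGLSceneWithWidth:(GLint)width height:(GLint)height {
    // Read the rendered pixels straight out of the currently bound framebuffer.
    NSInteger dataLength = (NSInteger)width * height * 4;
    GLubyte *buffer = (GLubyte *)malloc(dataLength);
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    CGDataProviderRef provider =
        CGDataProviderCreateWithData(NULL, buffer, dataLength, xxx_releasePixels);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef imageRef =
        CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
                      kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                      provider, NULL, NO, kCGRenderingIntentDefault);

    // OpenGL's origin is bottom-left, UIKit's is top-left, so flip vertically.
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), NO, 1.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, 0, height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    return image;
}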

UIGraphicsGetImageFromCurrentImageContext in specific CGRect

I want to take a screenshot of a specific location at a specific size. I found the snippet below, but it captures the whole screen. Where can I set the CGRect?
UIGraphicsBeginImageContext(self.window.bounds.size);
[self.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I was stuck on this a few days ago, actually... then after a while I managed to come up with a solution! I've implemented it as a category:
#import "UIView+RenderSubframe.h"
#import <QuartzCore/QuartzCore.h>
#implementation UIView (RenderSubframe)
- (UIImage *) renderWithSubframe:(CGRect)frame {
UIGraphicsBeginImageContextWithOptions(frame.size, NO, 0.0);
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextConcatCTM(c, CGAffineTransformMakeTranslation(-frame.origin.x, -frame.origin.y));
[self.layer renderInContext:c];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return screenshot;
}
#end
Voila!
If I'm not mistaken, this method doesn't actually render the unneeded part of the view at all, making this much more efficient than cropping the image afterwards.
In your case, you want to apply this category to a UIWindow class rather than a UIView.
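Usage then reduces to one call, for example to capture a 100×100 region of the window (the rect values are purely illustrative):

UIImage *cropped = [self.window renderWithSubframe:CGRectMake(40, 60, 100, 100)];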

Make a UIImage from a MKMapView

I want to create a UIImage from an MKMapView. The map is correctly displayed in the view; however, the UIImage produced is just a gray image. Here's the relevant snippet:
UIGraphicsBeginImageContext(mapView.bounds.size);
[mapView.layer renderInContext:UIGraphicsGetCurrentContext()];
mapImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Anyone know how to make a UIImage using MapKit?
I am using the same code, tested with iOS SDK 4.1, and it works fine. So when the map is already displayed to the user and the user presses the button, this action is called:
UIImage *image = [mapView renderToImage];
and here is the wrapper function, implemented as a UIView category:
- (UIImage *)renderToImage
{
    UIGraphicsBeginImageContext(self.frame.size);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
So, the problem is not in that code part.
On iOS 7, there is a new MapKit API for this called MKMapSnapshotter, so you don't need to create a map view, load the tiles, and capture a graphics context yourself.
Take a look at it here: https://developer.apple.com/library/ios/documentation/MapKit/Reference/MKMapSnapshotter_class/Reference/Reference.html
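A minimal sketch of how it is used; copying region, size, and scale from an on-screen mapView is just one option, and the options can be configured from scratch instead:

MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc] init];
options.region = mapView.region;
options.size = mapView.frame.size;
options.scale = [UIScreen mainScreen].scale;

MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc] initWithOptions:options];
[snapshotter startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    if (error) {
        NSLog(@"Snapshot failed: %@", error);
        return;
    }
    UIImage *mapImage = snapshot.image;  // the rendered map, ready to use
}];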
Here is the improved function for retina display:
@implementation UIView (Ext)

- (UIImage *)renderToImage
{
    // IMPORTANT: using weak link on UIKit
    if (UIGraphicsBeginImageContextWithOptions != NULL)
    {
        // A scale of 0.0 makes the context use the device's screen scale (Retina-aware).
        UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
    } else {
        UIGraphicsBeginImageContext(self.frame.size);
    }
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

@end
Hey Loren, there are multiple layers in the mapView. I think the first one is the map and the second one is the Google layer. They might have changed something in MapKit after 3.1. You can try:
[[[mapView.layer sublayers] objectAtIndex:1] renderInContext:UIGraphicsGetCurrentContext()];
You can also try:
CGRect rect = [mapView bounds];
CGImageRef mapImage = [mapView createSnapshotWithRect:rect];
Hope this helps.
Note that the mapView may not have finished loading, so the image may be gray. Since mapViewDidFinishLoadingMap: will not always be called, you should grab the UIImage in mapViewDidFinishRenderingMap:fullyRendered: instead. So the code looks like this:
- (UIImage *)renderToImage:(MKMapView *)mapView
{
    UIGraphicsBeginImageContext(mapView.bounds.size);
    [mapView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
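Wired up through the delegate callback, that might look like this (a sketch; it assumes your view controller is the map view's delegate):

- (void)mapViewDidFinishRenderingMap:(MKMapView *)mapView fullyRendered:(BOOL)fullyRendered {
    if (fullyRendered) {
        // The map is fully drawn at this point, so the image will not be gray.
        UIImage *image = [self renderToImage:mapView];
    }
}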
If you are calling this immediately after initializing the map (maybe in viewDidLoad?), you could get a gray image, as the map has not finished drawing yet. Try:
Calling the capture code with performSelector:withObject:afterDelay: and a short delay (even 0 seconds might work, so that it fires right after the current method finishes).
If you are adding annotations, calling the capture code in the didAddAnnotationViews delegate method.
Edit:
On the simulator, using performSelector, a zero delay works. On the device, a longer delay is required (about 5 seconds).
However, if you add annotations (and you capture in the didAddAnnotationViews method), it works immediately on both the simulator and the device. A sketch of the delayed-capture approach follows below.
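Here, captureMap is a hypothetical helper that runs the renderToImage: snippet above:

// Give the map time to draw before capturing; ~5 seconds was needed on device.
[self performSelector:@selector(captureMap) withObject:nil afterDelay:5.0];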
