UIGraphics drawInRect: crashes on 4S device - iOS

Basically, I do:
- (UIImage *)addPreviewImageToImage:(UIImage *)backImage
{
UIImage *newImage;
NSString *file = [[NSBundle mainBundle] pathForResource:@"sampleImage" ofType:@"png"];
UIImage *frontImage = [UIImage imageWithContentsOfFile:file];
float scale = 1.0f;
CGRect rectBack = CGRectMake(0, 0, scale * backImage.size.width, scale * backImage.size.height);
CGRect rectFront = [self rectForFrontElementOfSize:frontImage.size overElementOfSize:rectBack.size mediaType:kMediaTypeImage];
// Begin context
UIGraphicsBeginImageContextWithOptions(rectBack.size, YES, 0); // opaque = YES
// draw images
@autoreleasepool { // no effect
[backImage drawInRect:rectBack]; // where the crash occurs, backImage is about 4 MB
}
[frontImage drawInRect:rectFront];
// grab context
newImage = UIGraphicsGetImageFromCurrentImageContext();
// end context
UIGraphicsEndImageContext();
return newImage;
}
This works fine on an iPhone 5 or newer (iOS 7) but crashes about 80% of the time on a 4S, always on the drawInRect: call.
How can I optimize this code? Would writing backImage to disk and then reading it back via imageWithContentsOfFile: be more efficient? Any ideas?
EDIT (crash details):
After receiving memory warnings, Xcode terminates the app, stating:
"Terminated due to Memory Pressure"
Using breakpoints, I know this occurs when executing [backImage drawInRect:] (backImage is a picture from the camera).
rectBack description before crash:
(CGRect) rectBack = origin=(x=0, y=0) size=(width=2448, height=3264)
Note:
Setting the opaque argument to YES in UIGraphicsBeginImageContextWithOptions and reducing the scale argument seems to make it work, but I am losing quality and I'd like to know if there's something wrong in the logic.
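One way to cut the peak memory without dropping to a blurry screen scale is to render into a context no larger than the output actually needs, so the 2448x3264 camera bitmap is never duplicated at full size. A minimal sketch, assuming a hypothetical 1024-point cap (pick whatever your UI requires):

```objectivec
// Sketch: composite into a capped-size context instead of the full
// camera resolution. maxDimension is an assumed target, not a fixed rule.
- (UIImage *)previewSizedImageFromImage:(UIImage *)backImage maxDimension:(CGFloat)maxDimension
{
    CGFloat ratio = MIN(1.0, maxDimension / MAX(backImage.size.width, backImage.size.height));
    CGSize targetSize = CGSizeMake(backImage.size.width * ratio, backImage.size.height * ratio);

    // scale 1.0 keeps the bitmap at exactly targetSize pixels;
    // passing 0 would multiply it by the screen scale (2x on Retina).
    UIGraphicsBeginImageContextWithOptions(targetSize, YES, 1.0);
    [backImage drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
```

The front image can then be drawn into the same context at a rect scaled by the same ratio.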

Related

UIGraphicsBeginImageContextWithOptions very massive memory disaster

I have this code to take a screenshot of a UIView:
- (UIImage *)imageWithView:(UIView *)view {
UIImage *viewImage = nil;
float height;
UIDeviceOrientation orientation = [[UIDevice currentDevice] orientation];
if(orientation == UIDeviceOrientationLandscapeLeft || orientation == UIDeviceOrientationLandscapeRight){
height = view.frame.size.height;
}else{
height = 730;
}
UIGraphicsBeginImageContextWithOptions(CGSizeMake(view.frame.size.width, height), YES, 0.0);
[view drawViewHierarchyInRect:CGRectMake(0, -60, view.bounds.size.width, height) afterScreenUpdates:YES];
viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return viewImage;
}
... inside this UIView is a UIWebView.
I call this - (UIImage *)imageWithView:(UIView *)view method every time the UIWebView finishes loading. And here is my problem: when I open a small webpage like Google, the method takes a few KB of memory, but when I open something bigger like BBC, CNN, Yahoo or Stack Overflow, it can take up to 80 MB.
Here is a snapshot from Instruments while cnn.com was open.
After a few seconds the memory is released, but I don't want that big a spike because, as you might imagine, the UIWebView becomes useless during those seconds.
So, what are your suggestions for taking a screenshot of a UIWebView without such big memory usage? I don't even need the UIImage in good quality, because I put it in a 120x80 UIImageView in one of the screen corners.
The last parameter in UIGraphicsBeginImageContextWithOptions is scale. If you know you only need a low resolution rendering of the web view, try setting a value for scale less than 1.0.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(view.frame.size.width, height), YES, 0.2);

Getting black (empty) image from UIView drawViewHierarchyInRect:afterScreenUpdates:

After successfully using UIView’s new drawViewHierarchyInRect:afterScreenUpdates: method, introduced in iOS 7, to obtain an image representation (via UIGraphicsGetImageFromCurrentImageContext()) for blurring, my app also needed to obtain just a portion of a view. I managed to get it in the following manner:
UIImage *image;
CGSize blurredImageSize = [_blurImageView frame].size;
UIGraphicsBeginImageContextWithOptions(blurredImageSize, YES, .0f);
[aView drawViewHierarchyInRect: [aView bounds] afterScreenUpdates: YES];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This lets me retrieve aView’s content following _blurImageView’s frame.
Now, however, I would need to obtain a portion of aView, but this time this portion would be “inside”. Below is an image representing what I would like to achieve.
I have already tried creating a new graphics context and setting its size to the portion’s size (red box) and calling aView to draw in the rect that represents the red box’s frame (of course its superview’s frame being equal to aView’s) but the image obtained is all black (empty).
After a lot of tweaking I managed to find something that did the job, however I heavily doubt this is the way to go.
Here’s my [edited-for-Stack Overflow] code that works:
- (UIImage *) imageOfPortionOfABiggerView
{
UIView *bigViewToExtractFrom;
UIImage *image;
UIImage *wholeImage;
CGImageRef _image;
CGRect imageToExtractFrame; // assume this is set to the portion's (red box) frame
CGFloat screenScale = [[UIScreen mainScreen] scale];
// have to scale the rect due to (I suppose) the screen's scale for Core Graphics.
imageToExtractFrame = CGRectApplyAffineTransform(imageToExtractFrame, CGAffineTransformMakeScale(screenScale, screenScale));
UIGraphicsBeginImageContextWithOptions([bigViewToExtractFrom bounds].size, YES, screenScale);
[bigViewToExtractFrom drawViewHierarchyInRect: [bigViewToExtractFrom bounds] afterScreenUpdates: NO];
wholeImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// obtain a CGImage[Ref] from another CGImage, this lets me specify the rect to extract.
// However since the image is from a UIView which are all at 2x scale (retina) if you specify a rect in points CGImage will not take the screen's scale into consideration and will process the rect in pixels. You'll end up with an image from the wrong rect and half the size.
_image = CGImageCreateWithImageInRect([wholeImage CGImage], imageToExtractFrame);
wholeImage = nil;
// have to specify the image's scale due to CGImage not taking the screen's scale into consideration.
image = [UIImage imageWithCGImage: _image scale: screenScale orientation: UIImageOrientationUp];
CGImageRelease(_image);
return image;
}
I hope this will help anyone that stumbled upon my issue. Feel free to improve my snippet.
Thanks

Memory increases when merging multiple high resolution images into a single image, iOS

I have to merge multiple images (all of them high resolution) into a single one, and this uses a lot of memory. I save the original images to a local directory and set resized versions on image views placed at different locations on the main image. When saving the final merged image, I read the original images back from the local directory; at that point memory climbs, which causes a crash (due to memory) when the number of images is high.
Here is the code retrieving the original image from the local directory:
UIImage *originalImage = [UIImage imageWithContentsOfFile:[self getOriginalImagePath:imageview.tag]];
Is there any other way to get images from the local directory without loading them fully into memory?
Thanks in advance
There is no way to load an image without it going into memory. With some image formats you could, in theory, implement your own reader that scales the image down while reading the file, so that the original size never ends up in memory, but that would require a lot of work for little gain.
Overall you would be better off just saving the different sizes of images as separate files and loading only the correct size (you seem to be scaling them based on the screen size, so there are not that many different versions required).
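The separate-files suggestion might look like the sketch below. The "-thumb" file suffix and the JPEG quality are assumptions, not part of the original answer:

```objectivec
// Sketch: write a screen-sized copy next to the original once,
// then load only the small file on subsequent calls.
- (NSString *)thumbPathForPath:(NSString *)path
{
    return [[path stringByDeletingPathExtension] stringByAppendingString:@"-thumb.jpg"];
}

- (UIImage *)thumbForImageAtPath:(NSString *)path targetSize:(CGSize)targetSize
{
    NSString *thumbPath = [self thumbPathForPath:path];
    UIImage *thumb = [UIImage imageWithContentsOfFile:thumbPath];
    if (thumb) return thumb; // small file already exists, skip the big one

    @autoreleasepool {
        // The full-size image only lives inside this pool.
        UIImage *original = [UIImage imageWithContentsOfFile:path];
        UIGraphicsBeginImageContextWithOptions(targetSize, YES, 1.0);
        [original drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
        thumb = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        [UIImageJPEGRepresentation(thumb, 0.8) writeToFile:thumbPath atomically:YES];
    }
    return thumb;
}
```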
If you do keep resizing them on the fly, try to ensure that you get rid of the original versions as soon as possible, i.e., don't keep any image reference longer than required, and perhaps wrap the whole thing in @autoreleasepool (assuming ARC is being used):
@autoreleasepool {
UIImage *originalImage = [UIImage imageWithContentsOfFile:[self getOriginalImagePath:imageview.tag]];
UIImage *pThumbImage = [self scaleImageToSize:CGSizeMake(AppScreenBound.size.width, AppScreenBound.size.height) imageWithImage:originalImage];
originalImage = nil;
imageView.image = pThumbImage;
pThumbImage = nil;
// … ?
}
Similarly treat any other image handling that creates intermediate versions, i.e., get rid of references no longer required as soon as possible (such as by assigning nil or having them fall out of scope), and put @autoreleasepool { … } around subsections that may generate temporary objects.
Found a solution; posting it as an answer to my own question, as it might help other people. Reference: the Image I/O Programming Guide.
As an alternative to imageWithContentsOfFile:, one can use an image source (CGImageSourceRef).
Here is how I use it:
UIImage *originalWMImage = [self createCGImageFromFile:your-image-path];
The method createCGImageFromFile: gets the image content without loading the whole file into memory:
-(UIImage*)createCGImageFromFile:(NSString*)path
{
// Get the URL for the pathname passed to the function.
NSURL *url = [NSURL fileURLWithPath:path];
CGImageRef myImage = NULL;
CGImageSourceRef myImageSource;
CFDictionaryRef myOptions = NULL;
CFStringRef myKeys[2];
CFTypeRef myValues[2];
// Set up options if you want them. The options here are for
// caching the image in a decoded form and for using floating-point
// values if the image format supports them.
myKeys[0] = kCGImageSourceShouldCache;
myValues[0] = (CFTypeRef)kCFBooleanTrue;
myKeys[1] = kCGImageSourceShouldAllowFloat;
myValues[1] = (CFTypeRef)kCFBooleanTrue;
// Create the dictionary
myOptions = CFDictionaryCreate(NULL, (const void **) myKeys,
(const void **) myValues, 2,
&kCFTypeDictionaryKeyCallBacks,
& kCFTypeDictionaryValueCallBacks);
// Create an image source from the URL.
myImageSource = CGImageSourceCreateWithURL((CFURLRef)url, myOptions);
CFRelease(myOptions);
// Make sure the image source exists before continuing
if (myImageSource == NULL){
fprintf(stderr, "Image source is NULL.");
return nil;
}
// Create an image from the first item in the image source.
myImage = CGImageSourceCreateImageAtIndex(myImageSource,
0,
NULL);
CFRelease(myImageSource);
// Make sure the image exists before continuing
if (myImage == NULL){
fprintf(stderr, "Image not created from image source.");
return nil;
}
// Wrap and release the CGImage; imageWithCGImage: does not take ownership.
UIImage *result = [UIImage imageWithCGImage:myImage];
CGImageRelease(myImage);
return result;
}
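If the goal is a smaller in-memory bitmap rather than the full-size one, Image I/O can also downsample while decoding, so the full-resolution pixels are never materialized. A hedged sketch; the pixel cap is an assumed parameter:

```objectivec
// Sketch: decode a downsampled version directly via Image I/O.
// maxPixel caps the longest side of the decoded bitmap.
- (UIImage *)downsampledImageFromFile:(NSString *)path maxPixel:(CGFloat)maxPixel
{
    NSURL *url = [NSURL fileURLWithPath:path];
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)url, NULL);
    if (source == NULL) return nil;

    NSDictionary *options = @{
        (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
        (id)kCGImageSourceCreateThumbnailWithTransform : @YES, // honor EXIF orientation
        (id)kCGImageSourceThumbnailMaxPixelSize : @(maxPixel)
    };
    CGImageRef thumb = CGImageSourceCreateThumbnailAtIndex(source, 0,
                                                           (__bridge CFDictionaryRef)options);
    CFRelease(source);
    if (thumb == NULL) return nil;

    UIImage *result = [UIImage imageWithCGImage:thumb];
    CGImageRelease(thumb);
    return result;
}
```

This is often the cheaper option when the images are only going to be drawn at screen size anyway.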
Here is the code where the resized image is simply assigned to the image view; I then perform scaling and rotation on the image view.
UIImage *pThumbImage = [self scaleImageToSize:CGSizeMake(AppScreenBound.size.width, AppScreenBound.size.height) imageWithImage:originalImage];
[imageView setImage:pThumbImage];
And here is the saving part; this code is inside a for loop (running once per image to merge onto the main image):
// get size of the second image
CGFloat backgroundWidth = canvasSize.width;
CGFloat backgroundHeight = canvasSize.height;
//Image View: to be merged
UIImageView* imageView = [[UIImageView alloc] initWithImage:stampImage];
[imageView setFrame:CGRectMake(0, 0, stampFrameSize.size.width , stampFrameSize.size.height)];
// Rotate Image View
CGAffineTransform currentTransform = imageView.transform;
CGAffineTransform newTransform = CGAffineTransformRotate(currentTransform, radian);
[imageView setTransform:newTransform];
// Scale Image View
CGRect imageFrame = [imageView frame];
// Create Final Stamp View
UIView *finalStamp = nil;
finalStamp = [[UIView alloc] initWithFrame:CGRectMake(0, 0, imageFrame.size.width, imageFrame.size.height)];
// Set Center of Stamp Image
[imageView setCenter:CGPointMake(imageFrame.size.width /2, imageFrame.size.height /2)];
[finalStamp addSubview:imageView];
// Create Image From image View;
UIGraphicsBeginImageContext(finalStamp.frame.size);
[finalStamp.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImage *pfinalImage = nil;
// Create Final Image With Stamp
UIGraphicsBeginImageContext(CGSizeMake(backgroundWidth, backgroundHeight));
[canvasImage drawInRect:CGRectMake(0, 0, backgroundWidth, backgroundHeight)];
[viewImage drawInRect:CGRectMake(stampFrameSize.origin.x, stampFrameSize.origin.y, stampFrameSize.size.width, stampFrameSize.size.height) blendMode:kCGBlendModeNormal alpha:fAlphaValue];
pfinalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
Everything is okay up to here; the problem occurs while saving, i.e., while generating the merged image.
This is an old question, but I had to face something like this recently, so here is my answer.
I had to merge a lot of images into one and had the same problem: memory increased until the app crashed. The functions I created returned UIImage, and that was the problem; ARC was not releasing them in time. I changed them to return CGImageRef instead and released those at the proper time.

Screenshot should not capture the whole screen

I am making an application related to images. I have multiple images on my screen and I take a screenshot of them, but it should not capture my whole screen:
small parts at the very top and very bottom must not be shown in it.
I have a navigation bar at the top and some buttons at the bottom. I don't want to capture those buttons and the navigation bar in my screenshot image.
Below is my code for screen shot.
-(UIImage *) screenshot
{
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, YES, [UIScreen mainScreen].scale);
[self.view drawViewHierarchyInRect:self.view.frame afterScreenUpdates:YES];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
After taking the screenshot I use it in a Facebook share method with the code below:
UIImage *image12 =[self screenshot];
[mySLComposerSheet addImage:image12];
The easiest way to achieve this would be to add a UIView which holds all the content you want to take a screenshot of, and then call drawViewHierarchyInRect: on that UIView instead of on the main UIView.
Something like this:
-(UIImage *) screenshot {
UIGraphicsBeginImageContextWithOptions(contentView.bounds.size, YES, [UIScreen mainScreen].scale);
[contentView drawViewHierarchyInRect:contentView.frame afterScreenUpdates:YES];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
Hope this helps!
You can use my code below to take a screenshot of a view.
I have put in a condition to check the size of the screenshot.
With this code the image is saved to your Documents folder, and from there you can use it to share on Facebook or anywhere else you want.
CGSize size = self.view.bounds.size;
CGRect cropRect;
if ([self isPad])
{
cropRect = CGRectMake(110 , 70 , 300 , 300);
}
else
{
if (IS_IPHONE_5)
{
cropRect = CGRectMake(55 , 25 , 173 , 152);
}
else
{
cropRect = CGRectMake(30 , 25 , 164 , 141);
}
}
/* Get the entire on screen map as Image */
UIGraphicsBeginImageContext(size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * mapImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
/* Crop the desired region */
CGImageRef imageRef = CGImageCreateWithImageInRect(mapImage.CGImage, cropRect);
UIImage * cropImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
/* Save the cropped image
UIImageWriteToSavedPhotosAlbum(cropImage, nil, nil, nil);*/
//save to document folder
NSData * imageData = UIImageJPEGRepresentation(cropImage, 1.0);
NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString* documentsDirectory = [paths objectAtIndex:0];
NSString *imagename = [NSString stringWithFormat:@"Pic.jpg"];
NSString* fullPathToFile = [documentsDirectory stringByAppendingPathComponent:imagename];
//NSLog(@"full path %@", fullPathToFile);
[imageData writeToFile:fullPathToFile atomically:NO];
Hope it helps you.
use this code
-(IBAction)captureScreen:(id)sender
{
UIGraphicsBeginImageContext(webview.frame.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
}
sample project www.cocoalibrary.blogspot.com
http://www.bobmccune.com/2011/09/08/screen-capture-in-ios-apps/
snapshotViewAfterScreenUpdates
but it is only Available in iOS 7.0 and later.
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
This method captures the current visual contents of the screen from the render server and uses them to build a new snapshot view. You can use the returned snapshot view as a visual stand-in for the screen’s contents in your app. For example, you might use a snapshot view to facilitate a full screen animation. Because the content is captured from the already rendered content, this method reflects the current visual appearance of the screen and is not updated to reflect animations that are scheduled or in progress. However, this method is faster than trying to render the contents of the screen into a bitmap image yourself.
https://developer.apple.com/library/ios/documentation/uikit/reference/UIScreen_Class/Reference/UIScreen.html#//apple_ref/occ/instm/UIScreen/snapshotViewAfterScreenUpdates:
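For the original question (excluding the navigation bar and bottom buttons), the same iOS 7 snapshotting family can return just a region of a view when a live view stand-in is enough. A sketch; the 64pt top inset (status + navigation bar) and 50pt bottom inset are assumptions for a typical layout:

```objectivec
// Sketch: snapshot only the middle band of the screen, excluding
// assumed 64pt top and 50pt bottom chrome. iOS 7+.
- (UIView *)contentSnapshot
{
    CGRect full = self.view.bounds;
    CGRect middle = CGRectMake(0, 64, full.size.width, full.size.height - 64 - 50);
    return [self.view resizableSnapshotViewFromRect:middle
                                 afterScreenUpdates:NO
                                      withCapInsets:UIEdgeInsetsZero];
}
```

Note this returns a UIView, not a UIImage, so it suits on-screen placeholders but not Facebook sharing, where the render-to-context approaches above are still needed.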

How do I create a UIImage bigger than the device screen?

I'm working on an iPhone app that can create pictures and post them to Facebook and Instagram.
The correct size for Facebook photos seems to be 350x350, and indeed this code creates a 350x350 image exactly how I want:
-(UIImage *)createImage {
UIImageView *v = [[UIImageView alloc] initWithFrame:CGRectMake(0, screenHeight/2-349, 349, 349)];
v.image = [UIImage imageNamed:@"backgroundForFacebook.png"]; //"backgroundForFacebook.png" is 349x349.
//This code adds some text to the image.
CGSize dimensions = CGSizeMake(screenWidth, screenHeight);
CGSize imageSize = [self.ghhaiku.text sizeWithFont:[UIFont fontWithName:@"Georgia"
size:mediumFontSize]
constrainedToSize:dimensions lineBreakMode:0];
int textHeight = imageSize.height+16;
UITextView *tv = [self createTextViewForDisplay:self.ghhaiku.text];
tv.frame = CGRectMake((screenWidth/2)-(self.textWidth/2), screenHeight/3.5,
self.textWidth/2 + screenWidth/2, textHeight*2);
[v addSubview:tv];
//End of text-adding code
CGRect newRect = CGRectMake(0, screenHeight/2-349, 349, 349);
UIGraphicsBeginImageContext(newRect.size);
[[v layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *myImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[v removeFromSuperview];
return myImage;
}
But when I use the same code to create an Instagram image, which needs to be 612x612, I get the text only, no background image:
-(UIImage *)createImageForInstagram {
UIImageView *v = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 612, 612)];
v.image = [UIImage imageNamed:@"backgroundForInstagram.png"]; //"backgroundForInstagram.png" is 612x612.
//...text-adding code...
CGRect newRect = CGRectMake(0, 0, 612, 612);
UIGraphicsBeginImageContext(newRect.size);
[[v layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *myImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[v removeFromSuperview];
return myImage;
}
What am I doing wrong, and how do I fix it?
(While I'm at it, I'll also say that I'm very new to using graphic contexts, so if there's any awkwardness in the code I'd appreciate your pointing it out.)
EDIT: Now I've reduced the two methods to one, and this time I don't even get the text. Argh!
-(UIImage *)addTextToImage:(UIImage *)myImage withFontSize:(int)sz {
NSString *string=self.displayHaikuTextView.text;
NSString *myWatermarkText = [string stringByAppendingString:@"\n\n\t--haiku.com"];
NSDictionary *attrs = [NSDictionary dictionaryWithObjectsAndKeys:[UIFont fontWithName:@"Georgia"
size:sz],
NSFontAttributeName,
nil];
NSAttributedString *attString = [[NSAttributedString alloc] initWithString:myWatermarkText attributes:attrs];
UIGraphicsBeginImageContextWithOptions(myImage.size,NO,1.0);
[myImage drawAtPoint: CGPointZero];
NSString *longestLine = ghv.listOfLines[1];
CGSize sizeOfLongestLine = [longestLine sizeWithFont:[UIFont fontWithName:@"Georgia" size:sz]];
CGSize siz = CGSizeMake(sizeOfLongestLine.width, sizeOfLongestLine.height*5);
[attString drawAtPoint: CGPointMake(myImage.size.width/2 - siz.width/2, myImage.size.height/2-siz.height/2)];
myImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return myImage;
}
When I pass the arguments [UIImage imageNamed:@"backgroundForFacebook.png"] (a 349x349 image) and 12, everything is fine; I get the picture. When I pass the arguments [UIImage imageNamed:@"backgroundForInstagram.png"] (a 612x612 image) and 24, nothing doing.
Right now I'm just putting the text on the smaller image (@"backgroundForFacebook.png") and then resizing it, but that makes the text blurry, which I don't like.
EDIT: Just to cover the basics, here are images of 1) the method in which I call this method (to check the spelling) and 2) the Supporting Files and the Build Phases (to show the image is actually there). I also tried assigning longestLine a non-variable NSString. No luck. :(
FURTHER EDIT: Okay, logging the size and scale of the images as I go during addTextToImage: above, here's what I get for the smaller image, the one that's working:
2013-02-04 22:24:09.588 GayHaikuTabbed[38144:c07] 349.000000, 349.000000, 1.000000
And here's what I get for the larger image--it's a doozy.
Feb 4 22:20:36 Joels-MacBook-Air.local GayHaikuTabbed[38007] <Error>: CGContextGetFontRenderingStyle: invalid context 0x0
Feb 4 22:20:36 Joels-MacBook-Air.local GayHaikuTabbed[38007] <Error>: CGContextSetFillColorWithColor: invalid context 0x0
//About thirty more of these.
Feb 4 22:20:36 Joels-MacBook-Air.local GayHaikuTabbed[38007] <Error>: CGBitmapContextCreate: unsupported parameter combination: 0 integer bits/component; 0 bits/pixel; 0-component color space; kCGImageAlphaNoneSkipLast; 2448 bytes/row.
Feb 4 22:20:36 Joels-MacBook-Air.local GayHaikuTabbed[38007] <Error>: CGContextDrawImage: invalid context 0x0
Feb 4 22:20:36 Joels-MacBook-Air.local GayHaikuTabbed[38007] <Error>: CGBitmapContextCreateImage: invalid context 0x0
Step through the code. After you create myImage, go into the console and look at myImage.size and myImage.scale. Multiply the size numbers by the scale.
If your background image is Retina-quality, your image is actually 1224 x 1224.
From the UIImage docs:
You should avoid creating UIImage objects that are greater than 1024 x
1024 in size. Besides the large amount of memory such an image would
consume, you may run into problems when using the image as a texture
in OpenGL ES or when drawing the image to a view or layer. This size
restriction does not apply if you are performing code-based
manipulations, such as resizing an image larger than 1024 x 1024
pixels by drawing it to a bitmap-backed graphics context. In fact, you
may need to resize an image in this manner (or break it into several
smaller images) in order to draw it to one of your views.
If your image is actually 612 pixels (not points) but your code is rendering it as 1224 pixels, you can just change the scale property to 1.0.
If your image is actually 1224 pixels, you'll need to do something else, like
put your code on a bitmap-backed graphics context (i.e., calling UIGraphicsBeginImageContext around the offending code)
displaying a smaller version to the user
However, if your image is for Instagram, it should not be 1224 x 1224 :-)
Update: I noticed your app is haiku-related, so here is the answer in haiku format:
Big UIImage?
Bitmap-backed graphics context
Or shrink to 612
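The "change the scale property" fix from this answer might look like the following sketch, which re-wraps the same pixels at scale 1.0 if imageNamed: picked the file up as @2x:

```objectivec
// Sketch: if the 612px file was loaded as a 2x asset (size 306pt, scale 2.0),
// re-wrap its CGImage at scale 1.0 so it reports 612x612 points.
UIImage *img = [UIImage imageNamed:@"backgroundForInstagram.png"];
if (img.scale > 1.0) {
    img = [UIImage imageWithCGImage:img.CGImage
                              scale:1.0
                        orientation:img.imageOrientation];
}
NSLog(@"size = %@ scale = %.1f", NSStringFromCGSize(img.size), img.scale);
```

Logging size and scale here is the same diagnostic step suggested above, just done in code rather than in the debugger console.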
I always back up to the obvious questions:
Is your image actually properly called/spelled backgroundForInstagram.png?
Have you properly added it to your project?
When it was added, was it copied to the device in the copy step of the Build Phases?
What's in ghv at the time of the call in the edited code?
What's in item [1] of ghv.listOfLines at the time of that rendering?
These are the things I would look at in terms of debugging this code.
Try This
-(UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
//UIGraphicsBeginImageContext(newSize);
UIGraphicsBeginImageContextWithOptions(newSize, NO, 10.0); // 10.0 means 10 times bigger
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}