App crashes due to memory warning on iPad 3 - iOS

I am drawing lines on a UIImage displayed in a UIImageView. The first time, the drawing process goes fine and I assign the new image to the UIImageView, but when I repeat the process I get a memory warning and the app crashes with:
Terminating in response to backboard's termination.
I profiled my app: the CG raster data was taking 273 MB, and overall there were 341 MB of live bytes. I also wrapped the code in an autorelease pool, but without success. My code:
UIGraphicsBeginImageContext(imageView.image.size);
[imageView.image drawAtPoint:CGPointMake(0, 0)];
context2 = UIGraphicsGetCurrentContext();
for (int i = 0; i < kmtaObject.iTotalSize; i++)
{
    kmtaGroup = &kmtaObject.KMTA_GROUP_OBJ[i];
    // NSLog(@"Group # = %d", i);
    for (int j = 0; j < kmtaGroup->TotalLines; j++)
    {
        lineObject = &kmtaGroup->Line_INFO_OBJ[j];
        // NSLog(@"Line # = %d", j);
        // NSLog(@"*****************");
        x0 = lineObject->x0;
        y0 = lineObject->y0;
        x1 = lineObject->x1;
        y1 = lineObject->y1;
        color = lineObject->Color;
        lineWidth = lineObject->LinkWidth;
        lineColor = [self add_colorWithRGBAHexValue:color];
        linearColor = lineColor;
        // Brush width
        CGContextSetLineWidth(context2, lineWidth);
        // Line color
        CGContextSetStrokeColorWithColor(context2, [linearColor CGColor]);
        CGContextMoveToPoint(context2, x0, y0);
        CGContextAddLineToPoint(context2, x1, y1);
        CGContextStrokePath(context2);
    }
}
newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
imageView.image = newImage;

So it just happened that I stumbled onto a similar issue. I was assigning an image to the image view where the image itself was previously retained by other objects, being processed and so on. The result was that the image view did indeed somehow leak every one of those images.
The solution I used was to create a copy of the image at the CGImage level before assigning it to the image view. I guess there must be an issue with the backing bitmaps or something. Anyway, try creating a copy like this:
CGImageRef cgCopy = CGImageCreateCopy(originalImage.CGImage);
UIImage *copiedImage = [[UIImage alloc] initWithCGImage:cgCopy scale:originalImage.scale orientation:originalImage.imageOrientation];
CGImageRelease(cgCopy);
imageView.image = copiedImage;

The maximum resolution usable on an iOS device is 1024x1024; we can't use more than that size. Use 2x and 3x images for the respective device sizes.

Related

iOS Redrawing image to prevent deferred decompression resulting in a bigger image

I've noticed some people redraw images on a CGContext to prevent deferred decompression, and this has caused a bug in our app.
The bug is that the size of the image professes to remain the same, but the CGImageDataProvider data has extra bytes appended to it.
For example, we have a 797x500 PNG image downloaded from the Internet, and AsyncImageView redraws and returns the redrawn image.
Here is the code:
UIImage *image = [[UIImage alloc] initWithData:data];
if (image)
{
    // Log to compare size and data length...
    NSLog(@"BEFORE: %f %f", image.size.width, image.size.height);
    NSLog(@"LEN %ld", CFDataGetLength(CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))));
    // Original code from AsyncImageView
    // redraw to prevent deferred decompression
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawAtPoint:CGPointZero];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Log to compare size and data length...
    NSLog(@"AFTER: %f %f", image.size.width, image.size.height);
    NSLog(@"LEN %ld", CFDataGetLength(CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))));
    // Some other code...
}
The log shows as follows:
BEFORE: 797.000000 500.000000
LEN 1594000
AFTER: 797.000000 500.000000
LEN 1600000
I decided to print each byte one by one, and sure enough there were twelve 0s appended to each row.
Basically, the redrawing was causing the image data to be that of an 800x500 image. Because of this, our app was looking at the wrong pixel whenever it wanted to look at the (797 * row + column)th pixel.
We're not using any big images, so deferred decompression doesn't pose any problems, but should I decide to use this method to redraw images, there's a chance I might introduce a subtle bug.
Does anyone have a solution to this? Or is this a bug introduced by Apple and we can't really do anything?
As you've discovered, rows are padded out to a convenient size. This is generally done to make vector algorithms more efficient; you just need to adapt to that layout if you're going to use CGImage this way. Call CGImageGetBytesPerRow to find out the actual number of bytes allocated per row, and then compute your offsets from that (the byte offset of a pixel is bytesPerRow * row + column * bytesPerPixel).
That's probably best for you, but if you need to get rid of the padding, you can do so by creating your own CGBitmapContext and rendering into it. That's a heavily covered topic around Stack Overflow if you're not familiar with it. For example: How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?
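As a minimal sketch of the padding-aware lookup (the variable names image, row, and column are illustrative, not from the question's code, and a 32-bit RGBA bitmap is assumed):
// Read the RGBA pixel at (column, row), honoring row padding.
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
const UInt8 *bytes = CFDataGetBytePtr(pixelData);
size_t bytesPerRow = CGImageGetBytesPerRow(image.CGImage);
size_t bytesPerPixel = CGImageGetBitsPerPixel(image.CGImage) / 8;
size_t offset = bytesPerRow * row + column * bytesPerPixel; // not width * row!
UInt8 r = bytes[offset], g = bytes[offset + 1], b = bytes[offset + 2], a = bytes[offset + 3];
CFRelease(pixelData); // the copied data must be released or it leaks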

The memory cost of using drawInRect: to resize a picture

Recently I replaced the old Assets with PHAssets in my project. However, when I use my app to scale some pictures, I found that it usually crashes.
Using debug mode, I found it is a memory problem.
I use the code below to resize a picture:
+ (UIImage *)scaleRetangleToFitLen:(UIImage *)img sWidth:(float)wid sHeight:(float)hei {
    CGSize sb = img.size;
    if (img.size.height / img.size.width > hei / wid) {
        sb = CGSizeMake(wid, wid * img.size.height / img.size.width);
    } else {
        sb = CGSizeMake(img.size.width * hei / img.size.height, hei);
    }
    if (sb.width > img.size.width || sb.height > img.size.height) {
        sb = img.size;
    }
    UIImage *scaledImage = nil;
    UIGraphicsBeginImageContext(sb);
    [img drawInRect:CGRectMake(0, 0, sb.width, sb.height)];
    scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    img = nil;
    return scaledImage;
}
The memory increases by about 50 MB when the line
[img drawInRect:CGRectMake(0, 0, sb.width, sb.height)]
runs, and it is not set free even though the method has finished.
The target width and height are 304x228, the image's original size is about 3264x2448, and the returned image is 304x228. That means the image I actually want at the end is just a 304x228 image, yet it takes 50+ MB of memory.
Is there any way to free the memory that the drawInRect: call takes?
(@autoreleasepool does not work ~ 😢 😢)
When loading an image, iOS usually doesn't decompress it until it really needs to. So the image you pass into your function is most likely a JPEG or PNG that iOS keeps in memory in its compressed state. The moment you draw it, it is decompressed first, and the memory therefore increases significantly. I would expect an increase of about 3264 x 2448 x 4 bytes ≈ 32 MB (not 50 MB).
To get rid of the memory again, you need to make sure you release all references to the image you pass into your function. So the problem is outside the code you show in your question.
For a more specific answer, you'll need to show all the code that works with the original image.
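As a minimal sketch of what that looks like at the call site (path and smallImages are hypothetical names, since the question doesn't show the surrounding code, and the scaling method is assumed to live on the same class): the full-size image must be a local that dies with the pool, not something stored in a long-lived property.
@autoreleasepool {
    UIImage *original = [UIImage imageWithContentsOfFile:path]; // autoreleased full-size image
    UIImage *small = [[self class] scaleRetangleToFitLen:original sWidth:304 sHeight:228];
    [smallImages addObject:small]; // keep only the 304x228 result
} // the decompressed 3264x2448 bitmap becomes releasable here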

Receiving memory warning on iPad after drawing lines on a large image with CGContextRef

I am drawing lines on a UIImage using a CGContextRef. My images have very large resolutions, around 3000x4500. I don't get a memory warning if I draw a single line, but if I draw more than one line I get a memory warning, and after that my app crashes. I tried to release the CGContextRef object but got an error. My code:
UIGraphicsBeginImageContext(imageView.image.size);
[imageView.image drawAtPoint:CGPointMake(0, 0)];
context2 = UIGraphicsGetCurrentContext();
for (int i = 0; i < kmtaObject.iTotalSize; i++)
{
    kmtaGroup = &kmtaObject.KMTA_GROUP_OBJ[i];
    // NSLog(@"Group # = %d", i);
    for (int j = 0; j < kmtaGroup->TotalLines; j++)
    {
        lineObject = &kmtaGroup->Line_INFO_OBJ[j];
        // NSLog(@"Line # = %d", j);
        // NSLog(@"*****************");
        x0 = lineObject->x0;
        y0 = lineObject->y0;
        x1 = lineObject->x1;
        y1 = lineObject->y1;
        color = lineObject->Color;
        lineWidth = lineObject->LinkWidth;
        lineColor = [self add_colorWithRGBAHexValue:color];
        linearColor = lineColor;
        // Brush width
        CGContextSetLineWidth(context2, lineWidth);
        // Line color
        CGContextSetStrokeColorWithColor(context2, [linearColor CGColor]);
        CGContextMoveToPoint(context2, x0, y0);
        CGContextAddLineToPoint(context2, x1, y1);
        CGContextStrokePath(context2);
    }
}
newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
imageView.image = newImage;
Just the image alone will require 54 MB of memory (3000 * 4500 * 4 bytes).
Since only a portion can be displayed at a time, consider dividing the image into several sections, like the map tiles used in Apple Maps.
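A minimal sketch of the slicing step, assuming the tiles are written to disk once (paying the decode cost a single time) so the full-size image can then be released; tileSize, bigImage, and tilesDirectory are illustrative names, not from the question:
// One-time slicing pass: crop the big image into 512x512 tiles and
// save each as a PNG; at display time only the visible tiles are loaded.
const CGFloat tileSize = 512.0;
for (CGFloat y = 0; y < bigImage.size.height; y += tileSize) {
    for (CGFloat x = 0; x < bigImage.size.width; x += tileSize) {
        CGRect tileRect = CGRectMake(x, y,
                                     MIN(tileSize, bigImage.size.width - x),
                                     MIN(tileSize, bigImage.size.height - y));
        CGImageRef tileRef = CGImageCreateWithImageInRect(bigImage.CGImage, tileRect);
        NSString *tilePath = [tilesDirectory stringByAppendingPathComponent:
                              [NSString stringWithFormat:@"tile_%.0f_%.0f.png", x, y]];
        [UIImagePNGRepresentation([UIImage imageWithCGImage:tileRef]) writeToFile:tilePath
                                                                       atomically:YES];
        CGImageRelease(tileRef);
    }
}
A CATiledLayer-backed view can then request exactly the tiles that are on screen, so the full 3000x4500 bitmap never has to live in memory at once.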

Memory increases when merging multiple high-resolution images into a single image, iOS

I have to merge multiple images into a single image (all of high resolution), and this acquires lots of memory. I save the original images to a local directory and set resized images on image views placed at different locations over the main image. At the time of saving the final merged image, I read the original images back from the local directory. Here the memory increases, which causes an error (a crash due to memory) for higher numbers of images.
Here is the code for retrieving an original image from the local directory:
UIImage *originalImage = [UIImage imageWithContentsOfFile:[self getOriginalImagePath:imageview.tag]];
Is there any other way to get images from the local directory without loading them into memory?
Thanks in advance
There is no way to load an image without it going into memory. With some image formats you could, in theory, implement your own reader that scales the image down while reading the file, so that the original size never ends up in memory, but that would require a lot of work for little gain.
Overall you would be better off just saving the different sizes of the images as separate files and loading only the correct size (you seem to be scaling them based on the screen size, so there are not that many different versions required), as sketched below.
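A minimal sketch of that idea (the scaleImageToSize:imageWithImage: helper is the one from the question's own code; the path scheme is illustrative): render each needed size once, write it to disk, and load only that file afterwards.
// One-time export: save a screen-sized version next to the original
// so later loads never touch the full-resolution file.
UIImage *original = [UIImage imageWithContentsOfFile:originalPath];
UIImage *screenSized = [self scaleImageToSize:[UIScreen mainScreen].bounds.size
                               imageWithImage:original];
NSString *smallPath = [originalPath stringByAppendingString:@"_screen.jpg"];
[UIImageJPEGRepresentation(screenSized, 0.9) writeToFile:smallPath atomically:YES];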
If you do keep resizing them on the fly, try to ensure that you get rid of the original versions as soon as possible, i.e., don't keep any image references that are no longer required, and perhaps wrap the whole thing in @autoreleasepool (assuming ARC is being used):
@autoreleasepool {
    UIImage *originalImage = [UIImage imageWithContentsOfFile:[self getOriginalImagePath:imageview.tag]];
    UIImage *pThumbImage = [self scaleImageToSize:CGSizeMake(AppScreenBound.size.width, AppScreenBound.size.height) imageWithImage:originalImage];
    originalImage = nil;
    imageView.image = pThumbImage;
    pThumbImage = nil;
    // … ?
}
Similarly, treat any other image handling that creates intermediate versions the same way, i.e., get rid of references that are no longer required as soon as possible (such as by assigning nil or letting them fall out of scope), and put @autoreleasepool { … } around subsections that may generate temporary objects.
Found a solution; posting it as an answer to my own question since it might help other people. Reference: the Image I/O Programming Guide.
As an alternative to imageWithContentsOfFile:, one can use an image source.
Here is how I use it:
UIImage *originalWMImage = [self createCGImageFromFile:your-image-path];
The method createCGImageFromFile: gets the image content without loading it all into memory:
- (UIImage *)createCGImageFromFile:(NSString *)path
{
    // Get the URL for the pathname passed to the function.
    NSURL *url = [NSURL fileURLWithPath:path];
    CGImageRef myImage = NULL;
    CGImageSourceRef myImageSource;
    CFDictionaryRef myOptions = NULL;
    CFStringRef myKeys[2];
    CFTypeRef myValues[2];
    // Set up options if you want them. The options here are for
    // caching the image in a decoded form and for using floating-point
    // values if the image format supports them.
    myKeys[0] = kCGImageSourceShouldCache;
    myValues[0] = (CFTypeRef)kCFBooleanTrue;
    myKeys[1] = kCGImageSourceShouldAllowFloat;
    myValues[1] = (CFTypeRef)kCFBooleanTrue;
    // Create the options dictionary.
    myOptions = CFDictionaryCreate(NULL, (const void **)myKeys,
                                   (const void **)myValues, 2,
                                   &kCFTypeDictionaryKeyCallBacks,
                                   &kCFTypeDictionaryValueCallBacks);
    // Create an image source from the URL.
    myImageSource = CGImageSourceCreateWithURL((__bridge CFURLRef)url, myOptions);
    CFRelease(myOptions);
    // Make sure the image source exists before continuing.
    if (myImageSource == NULL) {
        fprintf(stderr, "Image source is NULL.");
        return nil;
    }
    // Create an image from the first item in the image source.
    myImage = CGImageSourceCreateImageAtIndex(myImageSource, 0, NULL);
    CFRelease(myImageSource);
    // Make sure the image exists before continuing.
    if (myImage == NULL) {
        fprintf(stderr, "Image not created from image source.");
        return nil;
    }
    // Wrap the CGImage, then release it so it isn't leaked.
    UIImage *result = [UIImage imageWithCGImage:myImage];
    CGImageRelease(myImage);
    return result;
}
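One caveat worth noting: with kCGImageSourceShouldCache set to kCFBooleanTrue, the decoded bitmap will still be cached once the image is actually drawn. If the goal is a small image without ever materializing the full-size bitmap, Image I/O can also do the downscaling inside the decoder via its thumbnail API. A minimal sketch (the method name and maxPixelSize parameter are illustrative):
// Decode straight to a bounded-size image; the full-resolution
// bitmap is never created in memory.
- (UIImage *)thumbnailFromFile:(NSString *)path maxPixelSize:(CGFloat)maxPixelSize
{
    NSURL *url = [NSURL fileURLWithPath:path];
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)url, NULL);
    if (source == NULL) return nil;
    NSDictionary *options = @{
        (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
        (id)kCGImageSourceCreateThumbnailWithTransform : @YES, // honor EXIF orientation
        (id)kCGImageSourceThumbnailMaxPixelSize : @(maxPixelSize)
    };
    CGImageRef thumb = CGImageSourceCreateThumbnailAtIndex(source, 0,
                           (__bridge CFDictionaryRef)options);
    CFRelease(source);
    if (thumb == NULL) return nil;
    UIImage *result = [UIImage imageWithCGImage:thumb];
    CGImageRelease(thumb);
    return result;
}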
Here is the code where the resized image is simply assigned to the image view; I then perform scaling and rotation on the image view:
UIImage *pThumbImage = [self scaleImageToSize:CGSizeMake(AppScreenBound.size.width, AppScreenBound.size.height) imageWithImage:pOriginalImage];
[imageView setImage:pThumbImage];
Here is the saving code; it runs inside a for loop (up to the number of images to merge onto the main image):
// Get the size of the background (canvas) image.
CGFloat backgroundWidth = canvasSize.width;
CGFloat backgroundHeight = canvasSize.height;
// Image view to be merged
UIImageView *imageView = [[UIImageView alloc] initWithImage:stampImage];
[imageView setFrame:CGRectMake(0, 0, stampFrameSize.size.width, stampFrameSize.size.height)];
// Rotate the image view
CGAffineTransform currentTransform = imageView.transform;
CGAffineTransform newTransform = CGAffineTransformRotate(currentTransform, radian);
[imageView setTransform:newTransform];
// Scale the image view
CGRect imageFrame = [imageView frame];
// Create the final stamp view
UIView *finalStamp = nil;
finalStamp = [[UIView alloc] initWithFrame:CGRectMake(0, 0, imageFrame.size.width, imageFrame.size.height)];
// Center the stamp image
[imageView setCenter:CGPointMake(imageFrame.size.width / 2, imageFrame.size.height / 2)];
[finalStamp addSubview:imageView];
// Create an image from the stamp view
UIGraphicsBeginImageContext(finalStamp.frame.size);
[finalStamp.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImage *pfinalMainImage = nil;
// Create the final image with the stamp
UIGraphicsBeginImageContext(CGSizeMake(backgroundWidth, backgroundHeight));
[canvasImage drawInRect:CGRectMake(0, 0, backgroundWidth, backgroundHeight)];
[viewImage drawInRect:CGRectMake(stampFrameSize.origin.x, stampFrameSize.origin.y, stampFrameSize.size.width, stampFrameSize.size.height) blendMode:kCGBlendModeNormal alpha:fAlphaValue];
pfinalMainImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
Everything is okay up to here; the problem occurs while saving or generating the merged image.
This is an old question, but I had to face something like this recently, so here is my answer.
I had to merge a lot of images into one and had the same problem. The memory increased until the app crashed. The functions I had created returned UIImage objects, and that was the problem: ARC was not releasing them in time. So I changed them to return CGImageRef instead and released those at the proper time.
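A minimal sketch of that pattern (the function name, parameters, and loop context are illustrative, not from the original code): the function hands back a CGImageRef the caller owns, so each intermediate bitmap can be released deterministically instead of waiting for an autorelease pool to drain.
// Caller owns the returned CGImageRef (Create rule) and must CGImageRelease it.
CGImageRef CreateMergedImage(CGImageRef background, CGImageRef stamp, CGRect stampRect)
{
    size_t width = CGImageGetWidth(background);
    size_t height = CGImageGetHeight(background);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), background);
    CGContextDrawImage(context, stampRect, stamp);
    CGImageRef merged = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    return merged; // release with CGImageRelease() when done
}
The caller can then release each merged step as soon as the next one is produced, so dozens of autoreleased UIImages never pile up inside the loop.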

Memory leak with a lot of big images in iPad

I'm trying to store UIImage data in an NSArray — 60 images, each around 300 KB — and then animate those images in a UIImageView.
My code:
NSMutableArray *arr = [[NSMutableArray alloc] init];
for (int i = 0; i < 61; ++i) {
    [arr addObject:[UIImage imageNamed:[NSString stringWithFormat:@"%i.png", i]]];
}
img.animationImages = arr;
img.animationDuration = 1.5;
img.contentMode = UIViewContentModeBottomLeft;
[img startAnimating];
When I test it in the iPad Simulator (4.3), it works fine.
When I test it on my device (iPad 1), the application crashes.
Note: the app does not crash if I comment out this line: [img startAnimating];
1. What could be the problem? I think it is a memory problem...?!
2. Can I store lots of UIImages in an NSArray?
Your PNG files may be 300 KB each, but PNG is a compressed format.
To get an idea of the size of the image itself, you should multiply the width, the height, and the bytes per pixel.
I.e., if an image has a size of 1024 x 1024 with an RGBA model, the image itself takes about 4 MB in memory. And this is just one image; with 60 of them, that is about 240 MB.
Note: this is only a rough rule of thumb, but it gives you an idea.
So you should keep path names in the array and load each image only when needed, and thumbnails should be resized and stored on disk as files. Do not just scale the UIImageView.
Here's a great article with code about resizing.
I solved the app crash problem an alternative way: I used a simple NSTimer, and on each tick I set the n'th image on the UIImageView.
I have a UIImageView in the xib file, pointed to by the img variable.
NSTimer *timer = [NSTimer scheduledTimerWithTimeInterval:.03
                                                  target:self
                                                selector:@selector(tiktak)
                                                userInfo:nil
                                                 repeats:YES];
int n = 0;
- (void)tiktak {
    n++;
    img.image = [UIImage imageNamed:
                 [NSString stringWithFormat:@"Blue Menu_%i.png", n]];
    if (n == 60) n = 0;
}
The animation is amazing. I've also checked it in Activity Monitor; CPU usage is 6% or less.
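One caveat to this approach: imageNamed: caches every image it loads, so on a memory-starved device like the iPad 1 the cache itself can grow as the animation cycles. If that becomes a problem, loading via imageWithContentsOfFile: bypasses the cache; a minimal sketch of the same callback (the bundle lookup is the standard pattern, the file names are from the answer above):
// Uncached variant of the timer callback: the previous frame's bitmap
// becomes releasable as soon as img.image is replaced.
- (void)tiktak {
    n++;
    NSString *path = [[NSBundle mainBundle] pathForResource:
                      [NSString stringWithFormat:@"Blue Menu_%i", n] ofType:@"png"];
    img.image = [UIImage imageWithContentsOfFile:path];
    if (n == 60) n = 0;
}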
I would not advise you to store that many UIImages in an NSArray; it's way too heavy and inefficient. Where are these images from (the bundle, the internet, the device's gallery)?
Edit: I don't know how you want it to look, but you can build an array with the names of the images, and then load each UIImage on demand and show it. Like this:
for (int i = 0; i < 60; i++) {
    NSString *nameOfPicture = [NSString stringWithFormat:@"picture_%d", i + 1];
    [array addObject:nameOfPicture];
}
