I recently replaced the old Assets library with PHAsset in my project. However, when I use my app to scale some pictures, it usually crashes.
Running in debug mode, I found it is a memory problem.
I use the code below to resize a picture:
+ (UIImage *)scaleRetangleToFitLen:(UIImage *)img sWidth:(float)wid sHeight:(float)hei {
    // Compute a target size based on the aspect ratio of the source image
    CGSize sb = img.size;
    if (img.size.height / img.size.width > hei / wid) {
        sb = CGSizeMake(wid, wid * img.size.height / img.size.width);
    } else {
        sb = CGSizeMake(img.size.width * hei / img.size.height, hei);
    }
    // Never scale up beyond the original size
    if (sb.width > img.size.width || sb.height > img.size.height) {
        sb = img.size;
    }
    UIImage *scaledImage = nil;
    UIGraphicsBeginImageContext(sb);
    [img drawInRect:CGRectMake(0, 0, sb.width, sb.height)];
    scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    img = nil;
    return scaledImage;
}
Memory increases by about 50 MB when the line
[img drawInRect:CGRectMake(0, 0, sb.width, sb.height)]
runs, and it is not freed even after the method returns.
The target width and height are 304x228, the original image is about 3264x2448, and the returned image is 304x228. In other words, the image I actually want in the end is just a 304x228 image, yet it takes 50+ MB of memory.
Is there any way to free the memory that drawInRect: takes?
(Wrapping it in @autoreleasepool does not help. 😢 😢)
When loading an image, iOS usually doesn't decompress it until it really needs to. So the image you pass into your function is most likely a JPEG or PNG that iOS keeps in memory in its compressed state. The moment you draw it, it is decompressed first, which is why memory increases significantly. I would expect an increase of about 3264 x 2448 x 4 bytes ≈ 30 MB (and not 50 MB).
To get rid of the memory again, you need to make sure you release all references to the image you pass into your function. So the problem is outside the code you show in your question.
For a more specific answer, you'll need to show all the code that works with the original image.
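As a side note, if all you ever need is the 304x228 version, one alternative (a hedged sketch, not the poster's code) is to let Image I/O downsample while decoding, so the full 3264x2448 bitmap never has to be decompressed. The imagePath variable and the 304-pixel limit below are assumptions taken from this question:
// Sketch: downsampled decode via Image I/O; imagePath is hypothetical.
NSURL *url = [NSURL fileURLWithPath:imagePath];
CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)url, NULL);
NSDictionary *options = @{
    (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
    (id)kCGImageSourceCreateThumbnailWithTransform   : @YES, // honor EXIF orientation
    (id)kCGImageSourceThumbnailMaxPixelSize          : @304  // this question's target size
};
CGImageRef small = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
UIImage *thumb = [UIImage imageWithCGImage:small];
CGImageRelease(small);
CFRelease(source);
Only the small decoded bitmap ever exists in memory, which avoids the 30+ MB spike entirely.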
Related
I've noticed some people redraw images into a CGContext to prevent deferred decompression, and this has caused a bug in our app.
The bug is that the reported size of the image remains the same, but the data returned by the CGImage's data provider has extra bytes appended to it.
For example, we have a 797x500 PNG image downloaded from the Internet, and the AsyncImageView redraws and returns the redrawn image.
Here is the code:
UIImage *image = [[UIImage alloc] initWithData:data];
if (image)
{
    // Log to compare size and data length...
    NSLog(@"BEFORE: %f %f", image.size.width, image.size.height);
    CFDataRef beforeData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
    NSLog(@"LEN %ld", CFDataGetLength(beforeData));
    CFRelease(beforeData); // CGDataProviderCopyData follows the Create rule
    // Original code from AsyncImageView
    // redraw to prevent deferred decompression
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawAtPoint:CGPointZero];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Log to compare size and data length...
    NSLog(@"AFTER: %f %f", image.size.width, image.size.height);
    CFDataRef afterData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
    NSLog(@"LEN %ld", CFDataGetLength(afterData));
    CFRelease(afterData);
    // Some other code...
}
The log shows as follows:
BEFORE: 797.000000 500.000000
LEN 1594000
AFTER: 797.000000 500.000000
LEN 1600000
I decided to print the bytes one by one, and sure enough there were twelve 0s appended to each row.
Basically, the redrawing produced the pixel data of an 800x500 image. Because of this our app was looking at the wrong pixel when it wanted to look at the (797 * row + column)th pixel.
We're not using any big images so deferred decompression doesn't pose any problems, but should I decide to use this method to redraw images, there's a chance I might introduce a subtle bug.
Does anyone have a solution to this? Or is this a bug introduced by Apple and we can't really do anything?
As you've discovered, rows are padded out to a convenient size. This is generally to make vector algorithms more efficient. You just need to adapt to that layout if you're going to use CGImage this way. You need to call CGImageGetBytesPerRow to find out the actual number of bytes allocated, and then adjust your offsets based on that (bytesPerRow * row + column).
That's probably best for you, but if you need to get rid of the padding, you can do that by creating your own CGBitmapContext and render into it. That's a heavily covered topic around Stack Overflow if you're not familiar with it. For example: How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?
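For example, here is a minimal sketch of padding-aware pixel access along the lines this answer describes. It assumes an 8-bit RGBA layout; real code should verify that with CGImageGetBitsPerPixel and CGImageGetAlphaInfo:
// Sketch: index pixels with the actual row stride, not width * 4.
CGImageRef cgImage = image.CGImage;
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
const UInt8 *bytes = CFDataGetBytePtr(pixelData);
size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);        // may exceed width * 4 due to padding
size_t bytesPerPixel = CGImageGetBitsPerPixel(cgImage) / 8; // 4 for 8-bit RGBA
size_t row = 10, column = 20; // arbitrary example coordinates
const UInt8 *pixel = bytes + row * bytesPerRow + column * bytesPerPixel;
NSLog(@"RGBA at (%zu, %zu): %u %u %u %u", column, row, pixel[0], pixel[1], pixel[2], pixel[3]);
CFRelease(pixelData); // balance the Copy above
With a 797x500 image padded to 800 pixels per row, bytesPerRow would be 3200 rather than 3188, which is exactly the twelve extra bytes per row observed in the question.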
I am drawing lines on a UIImage, and my image is in a UIImageView. The first time, the drawing goes fine and I assign the new image to the UIImageView, but when I repeat the process I get a memory warning and the app crashes:
Terminating in response to backboard's termination.
I have profiled my app: the CG raster data was taking 273 MB, and overall there were 341 MB live bytes. I also wrapped the code in an autorelease pool, but without success. My code:
UIGraphicsBeginImageContext(imageView.image.size);
[imageView.image drawAtPoint:CGPointMake(0, 0)];
context2 = UIGraphicsGetCurrentContext();
for (int i = 0; i < kmtaObject.iTotalSize; i++)
{
    kmtaGroup = &kmtaObject.KMTA_GROUP_OBJ[i];
    //NSLog(@"Group # = %d", i);
    for (int j = 0; j < kmtaGroup->TotalLines; j++)
    {
        lineObject = &kmtaGroup->Line_INFO_OBJ[j];
        // NSLog(@"Line # = %d", j);
        // NSLog(@"*****************");
        x0 = lineObject->x0;
        y0 = lineObject->y0;
        x1 = lineObject->x1;
        y1 = lineObject->y1;
        color = lineObject->Color;
        lineWidth = lineObject->LinkWidth;
        lineColor = [self add_colorWithRGBAHexValue:color];
        linearColor = lineColor;
        // Brush width
        CGContextSetLineWidth(context2, lineWidth);
        // Line color
        CGContextSetStrokeColorWithColor(context2, [linearColor CGColor]);
        CGContextMoveToPoint(context2, x0, y0);
        CGContextAddLineToPoint(context2, x1, y1);
        CGContextStrokePath(context2);
    }
}
newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
imageView.image = newImage;
So it just happened that I stumbled onto a similar issue. I was assigning an image to the image view where the image itself was previously retained by other objects, being processed and such... The result was that the image view did indeed leak every one of those images somehow.
The solution I used was to create a copy of the image at the CGImage level before assigning it to the image view. I guess there must be an issue with the bitmaps or something. Anyway, try creating a copy like this:
CGImageRef cgCopy = CGImageCreateCopy(originalImage.CGImage);
UIImage *copiedImage = [[UIImage alloc] initWithCGImage:cgCopy scale:originalImage.scale orientation:originalImage.imageOrientation];
CGImageRelease(cgCopy);
imageView.image = copiedImage;
The maximum resolution usable on an iOS device is 1024x1024; you can't use more than that. Use @2x and @3x images for the respective device sizes.
My iOS app utilizes a loop to cycle through images in a folder.
My application is supposed to loop through a total of 2031 images (sized 1200x900) inside a folder. The images were taken at 8 fps, and each image is displayed in turn as the loop runs, to simulate a video clip. After the 696th picture, the images cease to be displayed in the UIImageView, although the app continues looping.
To test whether the problem was a picture not existing, I started the loop at picture 200, but after picture 896 the UIImageView stopped displaying the pictures.
The Code:
imgName = [NSString stringWithFormat:@"subject_basline_mat k (%d).png", jojo];
jojo++;
imageToCrop.image = [UIImage imageNamed:imgName]; // imageToCrop is the UIImageView; its image is set to the image file here
imageToCrop.image = [self imageWithImage:imageToCrop.image convertToSize:self.imageToCrop.frame.size]; // Here the image is resized to fit the view bounds, which are 320x240
The loop is driven by a timer that fires about once every 0.8 seconds.
I ran my code with Instruments to see if there was a memory problem. Instruments is very heavy on my computer, so my application ran quite slowly; yet when I arrived at the 696th picture, the pictures kept displaying. It was almost as if my application running too quickly caused the pictures to stop being displayed... which I don't really understand.
The only memory-heavy part of the image switching seems to be the size conversion step, which is called by this line:
imageToCrop.image = [self imageWithImage:imageToCrop.image convertToSize:self.imageToCrop.frame.size];
The imageWithImage: method is here:
- (UIImage *)imageWithImage:(UIImage *)image convertToSize:(CGSize)size {
    @autoreleasepool {
        UIGraphicsBeginImageContext(size);
        [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
        UIImage *destImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return destImage;
    }
}
And the line [image drawInRect:CGRectMake(0, 0, size.width, size.height)]; uses up the most memory of all the image handling in the app.
Any ideas as to why my app will only display a certain number of images?
Try loading the full-size images from the app bundle by URL. For example:
@autoreleasepool {
    NSString *imgName = [NSString stringWithFormat:@"subject_basline_mat k (%d)", jojo];
    NSURL *imageURL = [[NSBundle mainBundle] URLForResource:imgName withExtension:@"png"];
    UIImage *image = [UIImage imageWithContentsOfFile:[imageURL path]];
    imageToCrop.image = [self imageWithImage:image convertToSize:self.imageToCrop.frame.size];
}
Almost for sure your problem is [UIImage imageNamed:imgName]. There are hundreds of posts here on the pitfalls of using it. The issue is that it caches the images; its real purpose is for a small number of images in your bundle.
If you have oodles of images, get the path to the image, then load it through a URL or file pointer. That way it's not cached. Note that when you do this, you lose the automatic "get-retina-image-automatically" behavior, so you will need to grab the appropriately sized image depending on whether the device is retina or not.
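A minimal sketch of what that scale-aware lookup could look like; the bundleImageNamed: helper and the "@2x" file-naming convention below are assumptions for illustration, not the asker's code:
// Sketch: pick the asset that matches the screen scale, bypassing imageNamed: caching.
- (UIImage *)bundleImageNamed:(NSString *)baseName {
    CGFloat scale = [UIScreen mainScreen].scale;
    NSString *suffix = (scale >= 2.0) ? @"@2x" : @"";
    NSString *path = [[NSBundle mainBundle] pathForResource:[baseName stringByAppendingString:suffix]
                                                     ofType:@"png"];
    if (!path) { // fall back to the 1x asset if no retina file exists
        path = [[NSBundle mainBundle] pathForResource:baseName ofType:@"png"];
    }
    // imageWithContentsOfFile: does not cache, unlike imageNamed:
    return [UIImage imageWithContentsOfFile:path];
}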
I have to merge multiple images (all of high resolution) into a single one, which uses a lot of memory. I saved the original images to a local directory and set resized copies on image views placed at different locations on the main image. At the time of saving the final merged image, I read the original images back from the local directory. Here the memory increases, which causes a crash (out of memory) when the number of images is high.
Here is the code for retrieving an original image from the local directory:
UIImage *originalImage = [UIImage imageWithContentsOfFile:[self getOriginalImagePath:imageview.tag]];
Is there any other way to get images from a local directory without loading them into memory?
Thanks in advance
There is no way to load an image without it going into memory. With some image formats you could, in theory, implement your own reader that scales the image down while reading the file, so that the original size never ends up in memory, but that would require a lot of work for little gain.
Overall you would be better off just saving the different sizes of images as separate files and loading only the correct size (you seem to be scaling them based on the screen size, so there are not that many different versions required).
If you do keep resizing them on the fly, try to ensure that you get rid of the original versions as soon as possible, i.e., don't keep any image reference longer than required, and perhaps wrap the whole thing in @autoreleasepool (assuming ARC is being used):
@autoreleasepool {
    UIImage *originalImage = [UIImage imageWithContentsOfFile:[self getOriginalImagePath:imageview.tag]];
    UIImage *thumbImage = [self scaleImageToSize:CGSizeMake(AppScreenBound.size.width, AppScreenBound.size.height) imageWithImage:originalImage];
    originalImage = nil; // drop the full-size image as soon as it has been scaled
    imageView.image = thumbImage;
    thumbImage = nil;
    // … ?
}
Treat any other image handling that creates intermediate versions similarly, i.e., get rid of references that are no longer required as soon as possible (by assigning nil or letting them fall out of scope), and put @autoreleasepool { … } around subsections that may generate temporary objects.
Found a solution; posting it as an answer to my own question, as it might help other people. Reference: the Image I/O Programming Guide.
As an alternative to imageWithContentsOfFile:, one can use an image source.
Here is how I use it:
UIImage *originalWMImage = [self createCGImageFromFile:your-image-path];
The method createCGImageFromFile: gets the image content without loading it into memory:
- (UIImage *)createCGImageFromFile:(NSString *)path
{
    // Get the URL for the pathname passed to the function.
    NSURL *url = [NSURL fileURLWithPath:path];
    CGImageRef myImage = NULL;
    CGImageSourceRef myImageSource;
    CFDictionaryRef myOptions = NULL;
    CFStringRef myKeys[2];
    CFTypeRef myValues[2];
    // Set up options if you want them. The options here are for
    // caching the image in a decoded form and for using floating-point
    // values if the image format supports them.
    myKeys[0] = kCGImageSourceShouldCache;
    myValues[0] = (CFTypeRef)kCFBooleanTrue;
    myKeys[1] = kCGImageSourceShouldAllowFloat;
    myValues[1] = (CFTypeRef)kCFBooleanTrue;
    // Create the options dictionary
    myOptions = CFDictionaryCreate(NULL, (const void **)myKeys,
                                   (const void **)myValues, 2,
                                   &kCFTypeDictionaryKeyCallBacks,
                                   &kCFTypeDictionaryValueCallBacks);
    // Create an image source from the URL.
    myImageSource = CGImageSourceCreateWithURL((__bridge CFURLRef)url, myOptions);
    CFRelease(myOptions);
    // Make sure the image source exists before continuing
    if (myImageSource == NULL) {
        fprintf(stderr, "Image source is NULL.");
        return NULL;
    }
    // Create an image from the first item in the image source.
    myImage = CGImageSourceCreateImageAtIndex(myImageSource, 0, NULL);
    CFRelease(myImageSource);
    // Make sure the image exists before continuing
    if (myImage == NULL) {
        fprintf(stderr, "Image not created from image source.");
        return NULL;
    }
    // Balance the Create call above; imageWithCGImage: retains the CGImage.
    UIImage *result = [UIImage imageWithCGImage:myImage];
    CGImageRelease(myImage);
    return result;
}
Here is the code: the resized image is simply assigned to the image view, and then I perform scaling and rotation on the image view.
UIImage *thumbImage = [self scaleImageToSize:CGSizeMake(AppScreenBound.size.width, AppScreenBound.size.height) imageWithImage:originalImage];
[imageView setImage:thumbImage];
Here is the saving code; it runs inside a for loop (up to the number of images to merge onto the main image):
// get the size of the background image
CGFloat backgroundWidth = canvasSize.width;
CGFloat backgroundHeight = canvasSize.height;
// Image view to be merged
UIImageView *imageView = [[UIImageView alloc] initWithImage:stampImage];
[imageView setFrame:CGRectMake(0, 0, stampFrameSize.size.width, stampFrameSize.size.height)];
// Rotate the image view
CGAffineTransform currentTransform = imageView.transform;
CGAffineTransform newTransform = CGAffineTransformRotate(currentTransform, radian);
[imageView setTransform:newTransform];
// Scale the image view
CGRect imageFrame = [imageView frame];
// Create the final stamp view
UIView *finalStamp = nil;
finalStamp = [[UIView alloc] initWithFrame:CGRectMake(0, 0, imageFrame.size.width, imageFrame.size.height)];
// Center the stamp image
[imageView setCenter:CGPointMake(imageFrame.size.width / 2, imageFrame.size.height / 2)];
[finalStamp addSubview:imageView];
// Create an image from the stamp view
UIGraphicsBeginImageContext(finalStamp.frame.size);
[finalStamp.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImage *pfinalMainImage = nil;
// Create the final image with the stamp
UIGraphicsBeginImageContext(CGSizeMake(backgroundWidth, backgroundHeight));
[canvasImage drawInRect:CGRectMake(0, 0, backgroundWidth, backgroundHeight)];
[viewImage drawInRect:CGRectMake(stampFrameSize.origin.x, stampFrameSize.origin.y, stampFrameSize.size.width, stampFrameSize.size.height) blendMode:kCGBlendModeNormal alpha:fAlphaValue];
pfinalMainImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
Everything is okay up to here; the problem occurs while saving or generating the merged image.
This is an old question, but I had to face something like this recently, so here is my answer.
I had to merge a lot of images into one and had the same problem: memory increased until the app crashed. The functions I had created returned UIImage objects, and that was the problem. ARC was not releasing them in time, so I changed them to return CGImageRef instead and released those explicitly at the proper time.
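To illustrate the pattern this answer describes, here is a minimal, hypothetical sketch (not the answerer's actual code): the function follows the Core Foundation Create rule, so the caller decides exactly when each merged bitmap is released. The method name and the stampImage/canvasImage variables are assumptions taken from the question above:
// Hypothetical helper: returns a +1 CGImageRef; the caller must CGImageRelease() it.
- (CGImageRef)newMergedImageByDrawing:(UIImage *)stamp onto:(UIImage *)background
{
    UIGraphicsBeginImageContextWithOptions(background.size, NO, background.scale);
    [background drawAtPoint:CGPointZero];
    [stamp drawAtPoint:CGPointZero];
    UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return CGImageRetain(merged.CGImage); // hand ownership to the caller
}

// Usage inside the merge loop: release each bitmap as soon as it is consumed.
@autoreleasepool {
    CGImageRef mergedRef = [self newMergedImageByDrawing:stampImage onto:canvasImage];
    // ... write mergedRef to disk, e.g. via CGImageDestination ...
    CGImageRelease(mergedRef);
}
This way the lifetime of each large bitmap is explicit rather than dependent on when ARC and the autorelease pool get around to draining.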
I'm using the following code to rotate an image:
http://www.platinumball.net/blog/2010/01/31/iphone-uiimage-rotation-and-scaling/
That's one of the few image transformations I do before uploading an image to the server; I also have some other transformations: normalize, crop, resize.
Each one of the transformations returns a (UIImage *), and I add those functions using a category. I use them like this:
UIImage *img = ...; // image from camera
img = [[[[img normalize] rotate] scale] resize];
[self upload:img];
After selecting 3~4 photos from the camera and executing the same code each time, I get a memory warning in Xcode.
I'm guessing I have a memory leak somewhere (even though I'm using ARC). I'm not very experienced with the Xcode debugging tools, so I started printing the retain count after each method:
UIImage *img = ...; // image from camera
img = [img normalize];
img = [img rotate]; // retain count increases :(
img = [img scale];
img = [img resize];
The only operation that increases the retain count is the rotation. Is this normal?
The only operation that increases the retain count is the rotation. Is this normal?
It's quite possible that the UIGraphicsGetImageFromCurrentImageContext() call in your rotate function ends up retaining the image. If so, it almost certainly also autoreleases the image in keeping with the normal Cocoa memory management rules. Either way, you shouldn't worry about it. As long as your rotate function doesn't itself contain any unbalanced retain (or alloc, new, or copy) calls, you should expect to be free of leaks. If you do suspect a leak, it's better to track it down with Instruments than by watching retainCount yourself.
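If the autoreleased intermediates are what push you into memory warnings, one hedged option (assuming the category methods from the question exist, and a hypothetical selectedPhotos array and upload: method) is to drain them after each photo:
// Sketch: one pool per photo, so the normalize/rotate/scale/resize
// intermediates are released before the next photo is processed.
for (UIImage *photo in selectedPhotos) {
    @autoreleasepool {
        UIImage *img = [[[[photo normalize] rotate] scale] resize];
        [self upload:img];
    }
}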