Resize image data on iPhone - iOS

I get image data from NSURLConnection and store the data using NSMutableData. The point is that I want to store a smaller version of the image.
Now I use the following code to get a smaller image:
UIImage *temp = [[UIImage alloc] initWithData: imageData];
image = [temp resizedImage:CGSizeMake(72, 72) interpolationQuality:kCGInterpolationHigh];
smallImageData = UIImagePNGRepresentation(image);
My question is: can I get smallImageData directly from imageData?
I think my current approach will be expensive if I have many connections fetching image data.

This is a correct way to downsize images. If this is in a tight loop, you can wrap it in an autorelease pool to make sure any temporary objects are released promptly.
If you are running into memory issues, serialize your requests so they run one at a time.
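A minimal sketch of the autorelease-pool suggestion, reusing the code from the question (the resizedImage:interpolationQuality: call is assumed to come from a UIImage resizing category, as in the original snippet). The pool ensures the temporary full-size UIImage is released at the end of each pass rather than piling up across many downloads:
@autoreleasepool {
    // Full-size image decoded from the downloaded bytes; released when the pool drains.
    UIImage *temp = [[UIImage alloc] initWithData:imageData];
    UIImage *image = [temp resizedImage:CGSizeMake(72, 72)
                   interpolationQuality:kCGInterpolationHigh];
    // smallImageData is assigned to a variable declared outside the pool, so it survives.
    smallImageData = UIImagePNGRepresentation(image);
}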

Related

Why is the UIImage not released by ARC when I use UIGraphicsGetImageFromCurrentImageContext inside a block?

I try to download an image from a server using NSURLSessionDownloadTask (an iOS 7 API), and inside the completion block I want the original image to be resized and stored locally. So I wrote a helper method to create a bitmap context, draw the image, and get the new image from UIGraphicsGetImageFromCurrentImageContext(). The problem is that the image is never released each time I do this. However, if I skip the context and image drawing, things work fine and there is no memory growth. There is no CGImageCreate/Release function called, so there is really nothing to release manually here, and nothing is fixed by adding @autoreleasepool. Is there any way to fix this? I really want to modify the original image after downloading it and before storing it.
Here are some snippets showing the issue:
[self fetchImageByDownloadTaskWithURL:url completion:^(UIImage *image, NSError *error) {
    UIImage *modifiedImage = [image resizedImageScaleAspectFitToSize:imageView.frame.size];
    // save to local disk
    // ...
}];
// This is the resize method in UIImage Category
- (UIImage *)resizedImageScaleAspectFitToSize:(CGSize)size
{
    CGSize imageSize = [self scaledSizeForAspectFitToSize:size];
    UIGraphicsBeginImageContextWithOptions(imageSize, YES, 0.0);
    CGRect imageRect = CGRectMake(0.0, 0.0, imageSize.width, imageSize.height);
    [self drawInRect:imageRect]; // nothing changes if self is made weak here
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
updates:
When I dig into it with the Allocations instrument, I find that the memory growth is related to "VM: CG raster data". In my storing method I use NSCache as a photo memory cache before storing the image persistently, and the raster data eats a lot of memory when I use the memory cache. It seems that after the rendered image is cached, all of its drawing data also stays alive in memory until I release every cached image. If I don't memory-cache the image, then none of the raster data coming from my image category method stays alive in memory. I just cannot figure out why the drawing data is not released once the image has been cached. Shouldn't it be released after drawing?
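One way to at least keep the cached raster data bounded (a hedged sketch, not the poster's code: the PhotoCache/CachePhoto names and the byte-cost estimate are illustrative) is to give NSCache a cost limit so rendered images are evicted under memory pressure:
#import <UIKit/UIKit.h>

// Shared cache with a rough cap on decoded-bitmap bytes (value illustrative).
static NSCache *PhotoCache(void) {
    static NSCache *cache;
    static dispatch_once_t once;
    dispatch_once(&once, ^{
        cache = [[NSCache alloc] init];
        cache.totalCostLimit = 20 * 1024 * 1024; // ~20 MB of decoded bitmap data
    });
    return cache;
}

static void CachePhoto(UIImage *image, NSString *key) {
    // Approximate decoded size: width * height * 4 bytes, in device pixels.
    NSUInteger cost = (NSUInteger)(image.size.width * image.scale *
                                   image.size.height * image.scale * 4);
    [PhotoCache() setObject:image forKey:key cost:cost];
}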
new updates:
I still haven't figured out why the raster data is not released while the image used for drawing is alive, and there are no analyzer warnings about it. So I guess I just have to avoid caching the huge image used for drawing at the large size, and remove cached drawing images when I no longer need them. If I call [UIImage imageNamed:] and draw with the result, the image and its raster data seem never to be released, since the image is cached by the system. So I call [UIImage imageWithContentsOfFile:] instead, and then the memory behaves well. The remaining growth shows up as "non-object" allocations in the Allocations instrument, which I can't explain yet. Simulating a memory warning does release the system-cached images created by [UIImage imageNamed:]; as for the raster data, I will run some more tests tomorrow.
Try making your category method a class method instead. Perhaps the leak is the original CGImage data, which you overwrite when you call [self drawInRect:imageRect];.
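For reference, a sketch of the class-method variant being suggested. The method name and the inline aspect-fit math are assumptions, since the original scaledSizeForAspectFitToSize: helper is not shown; the drawing logic is otherwise the same:
+ (UIImage *)imageByScalingImage:(UIImage *)source aspectFitToSize:(CGSize)size
{
    // Compute the aspect-fit size inline (replaces scaledSizeForAspectFitToSize:).
    CGFloat scale = MIN(size.width / source.size.width,
                        size.height / source.size.height);
    CGSize fitSize = CGSizeMake(source.size.width * scale,
                                source.size.height * scale);

    UIGraphicsBeginImageContextWithOptions(fitSize, YES, 0.0);
    [source drawInRect:CGRectMake(0.0, 0.0, fitSize.width, fitSize.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}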

What is the advantage of imageNamed?

I know that loading an image like this:
UIImage *image = [UIImage imageNamed:@"img"];
will cache the image, while loading it like this will not:
UIImage *image = [UIImage imageWithContentsOfFile:@"img.png"];
People say it will be faster to access the cached image because iOS will serve it from memory and we will not have the overhead of reading and decoding the file. OK, I see that, but suppose I use the second, non-cached method to load the image into a view that is a property, like this:
UIImage *image = [UIImage imageWithContentsOfFile:@"img.png"];
self.imageView = [[UIImageView alloc] initWithImage:image];
Isn't the image already in memory? If I want to access it, I simply do imageView.image and get it from memory.
I am probably tired, but I cannot imagine a single use for the cached version, or I am not understanding what this cache means.
Care to explain? Thanks.
Imagine that your image is some icon that you use in 7 different view controllers... You could either load the image once and then pass it to each VC or you could use imageNamed... Your choice.
From the documentation:
This method looks in the system caches for an image object with the specified name and returns that object if it exists. If a matching image object is not already in the cache, this method locates and loads the image data from disk or an asset catalog, and then returns the resulting object. You cannot assume that this method is thread safe.
Let's say you have an image in your app. When you need to use the image, you use this code:
UIImage *image = [UIImage imageWithContentsOfFile:@"img.png"];
iOS will then look for your image in the app bundle, load it into memory, and then decode it into the UIImage.
However, say you need 10 different objects to use the image and you load it similar to this:
for (ClassThatNeedsImage *object in objects) {
    object.image = [UIImage imageWithContentsOfFile:@"img.png"];
}
(This isn't the best example since you could just load the image once and pass it to each of the objects. However, I have had more complex code where that is not an option.)
iOS will then look for the image 10 times, load it into memory 10 times, and then decode it 10 times. However, if you use imageNamed:
for (ClassThatNeedsImage *object in objects) {
    object.image = [UIImage imageNamed:@"img"];
}
From Wikipedia:
In computing, a cache is a component that transparently stores data so that future requests for that data can be served faster.
The cache used by UIImage is stored in memory, which is much faster to access than the disk.
The first time through the loop, iOS looks in the cache to see if the image is stored there. Assuming you haven't loaded this image with imageNamed previously, it doesn't find it, so it looks for the image, loads it into memory, decodes it, and then copies it into the cache.
On the other iterations, iOS looks in the cache, finds the image, and copies it into the UIImage object, so it doesn't have to do any hard disk access at all.
If you are only going to use the image once in the lifetime of your app, use imageWithContentsOfFile:. If you are going to use the image multiple times, use imageNamed:.
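For completeness, a small sketch of the two loading styles side by side. Note that imageWithContentsOfFile: needs a full filesystem path, so the bundle lookup below is an assumption added for illustration; imageNamed: resolves the name within the bundle (and asset catalogs) for you and caches the decoded result:
// Uncached: a full path is required, and nothing is added to the system cache.
NSString *path = [[NSBundle mainBundle] pathForResource:@"img" ofType:@"png"];
UIImage *uncached = [UIImage imageWithContentsOfFile:path];

// Cached: the first call loads and decodes; later calls with the same name
// are served from the in-memory system cache.
UIImage *cached = [UIImage imageNamed:@"img"];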

Reducing memory usage with UIImagePickerController

In my app, the user can take multiple images using the UIImagePickerController, and those images are then displayed one by one in the view.
I've been having some trouble with memory management. With cameras on today's phones quickly rising in megapixels, UIImages returned from UIImagePickerController are memory hogs. On my iPhone 4S, the UIImages are around 5MB; I can hardly imagine what they're like on the newer and future models.
A friend of mine said that the best way to handle UIImages was to immediately save them to a JPEG file in my app's documents directory and to release the original UIImage as soon as possible. So this is what I've been trying to do. Unfortunately, even after saving the UIImage to a JPEG and leaving no references to it in my code, it is not being deallocated.
Here are the relevant sections of my code. I am using ARC.
// Entry point: UIImagePickerController delegate method
-(void) imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    // Process the image. The method returns a pathname.
    NSString* path = [self processImage:[info objectForKey:UIImagePickerControllerOriginalImage]];
    // Add the image to the view
    [self addImage:path];
}

-(NSString*) processImage:(UIImage*)image {
    // Get a file path
    NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString* documentsDirectory = [paths objectAtIndex:0];
    NSString* filename = [self makeImageFilename]; // implementation omitted
    NSString* imagePath = [documentsDirectory stringByAppendingPathComponent:filename];

    // Get the image data (blocking; around 1 second)
    NSData* imageData = UIImageJPEGRepresentation(image, 0.1);

    // Write the data to a file
    [imageData writeToFile:imagePath atomically:YES];

    // Upload the image (non-blocking)
    [self uploadImage:imageData withFilename:filename];

    return imagePath;
}

-(void) uploadImage:(NSData*)imageData withFilename:(NSString*)filename {
    // this sends the upload job (implementation omitted) to a thread
    // pool, which in this case is managed by PhoneGap
    [self.commandDelegate runInBackground:^{
        [self doUploadImage:imageData withFilename:filename];
    }];
}

-(void) addImage:(NSString*)path {
    // implementation omitted: make a UIImageView (set bounds, etc). Save it
    // in the variable iv.
    iv.image = [UIImage imageWithContentsOfFile:path];
    [iv setNeedsDisplay];
    NSLog(@"Displaying image named %@", path);
    self.imageCount++;
}
Notice how the processImage method takes a reference to a UIImage, but it uses it for only one thing: making the NSData* representation of that image. So, after the processImage method is complete, the UIImage should be released from memory, right?
What can I do to reduce the memory usage of my app?
Update
I now realize that a screenshot of the allocations profiler would be helpful for explaining this question.
Your processImage method is not your problem.
We can test your image-saving code by transplanting it into Apple's PhotoPicker demo app.
Conveniently, Apple's sample project is very similar to yours, with a method to take repeated pictures on a timer. In the sample, the images are not saved to the filesystem, but accumulated in memory. It comes with this warning:
/*
Start the timer to take a photo every 1.5 seconds.
CAUTION: for the purpose of this sample, we will continue to take pictures indefinitely.
Be aware we will run out of memory quickly. You must decide the proper threshold number of photos allowed to take from the camera.
One solution to avoid memory constraints is to save each taken photo to disk rather than keeping all of them in memory.
In low memory situations sometimes our "didReceiveMemoryWarning" method will be called in which case we can recover some memory and keep the app running.
*/
With your method added to Apple's code, we can address this issue.
The imagePicker delegate method looks like this:
- (void)imagePickerController:(UIImagePickerController *)picker
        didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info valueForKey:UIImagePickerControllerOriginalImage];

    [self.capturedImages removeAllObjects];                 // (1)
    [self.imagePaths addObject:[self processImage:image]];  // (2)

    [self.capturedImages addObject:image];

    if ([self.cameraTimer isValid])
    {
        return;
    }
    [self finishAndUpdate];                                 // (3)
}
(1) - our addition, to flush the live memory on each image-capture event.
(2) - our addition, to save the image to the filesystem and build a list of filesystem paths.
(3) - for our tests we are using the cameraTimer to take repeated images, so finishAndUpdate does not get called.
I have used your processImage: method as is, with the line:
[self uploadImage:imageData withFilename:filename];
commented out.
I have also added a small makeImageFilename method:
static int imageName = 0;

-(NSString*)makeImageFilename {
    imageName++;
    return [NSString stringWithFormat:@"%d.jpg", imageName];
}
These are the only additions I have made to Apple's code.
Here is the memory footprint of Apple's original code (cameraTimer run without (1) and (2)): memory climbed to ~140 MB after capturing ~40 images.
Here is the memory footprint with the additions (cameraTimer run with (1) and (2)): the file-saving approach fixes the memory issue. Memory stays flat, with spikes of ~30 MB per image capture.
These tests were run on an iPhone 5S. Uncompressed images are 3264 x 2448 px, which should be around 24 MB (24-bit RGB). The JPEG-compressed (filesystem) size ranges from ~250 KB (0.1 quality, as per your code) to 1-2 MB (0.7 quality), up to ~6 MB (1.0 quality).
In a comment to your question, you suggest that a re-loaded image will benefit from that compression. This is not the case: when an image is loaded into memory it must first be uncompressed. Its memory footprint will be approximately pixels x colours x bit-depth per colour, regardless of how the image is stored on disk. As jrturton has pointed out, this at least suggests that you should avoid loading an image at a greater resolution than you need for display. Say you have a full-screen (retina) imageView of 832 x 640; you are wasting memory if you load an image larger than that and your user cannot zoom in. That is a live memory footprint of ~1.6 MB, a huge improvement on your ~24 MB original (but this is a digression from your main issue).
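If you do want to load a downsampled version rather than the full-resolution file, one way (a sketch, not part of the original answer; the helper name is illustrative) is ImageIO's thumbnail support, which decodes to at most a bounded pixel size:
#import <UIKit/UIKit.h>
#import <ImageIO/ImageIO.h>

// Decode the file at "path" to at most maxPixelSize on its longer side,
// so the in-memory bitmap stays close to what the screen actually needs.
static UIImage *DownsampledImage(NSString *path, CGFloat maxPixelSize) {
    NSURL *url = [NSURL fileURLWithPath:path];
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)url, NULL);
    if (!source) return nil;

    NSDictionary *options = @{
        (id)kCGImageSourceCreateThumbnailFromImageAlways: @YES,
        (id)kCGImageSourceThumbnailMaxPixelSize: @(maxPixelSize),
        (id)kCGImageSourceCreateThumbnailWithTransform: @YES
    };
    CGImageRef cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0,
                                                             (__bridge CFDictionaryRef)options);
    CFRelease(source);
    if (!cgImage) return nil;

    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}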
As processImage doesn't seem to be the cause of your memory trouble, you should look at other possibilities:
1/ You don't have a memory issue. How are you profiling the app?
2/ One of addImage or uploadImage is retaining memory. Try commenting each out in turn to identify which.
3/ The problem is elsewhere (something managed by PhoneGap?)
As for those memory spikes, they are caused by the image-to-data JPEG compression line:
NSData* imageData = UIImageJPEGRepresentation(image, 0.1);
Under the hood that is ImageIO, and it is probably unavoidable when using UIImagePickerController. See here: Most memory efficient way to save a photo to disk on iPhone? If you switch to AVFoundation you can get at the image as unconverted NSData, so you could avoid the spike.
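A sketch of that AVFoundation route, using the iOS 7-era still-image API: the camera hands back already-compressed JPEG data, so no UIImageJPEGRepresentation pass (and no large decoded bitmap) is needed just to write the file. The session setup and the imagePath variable are assumptions for illustration:
#import <AVFoundation/AVFoundation.h>

AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetPhoto;

AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:NULL];
if ([session canAddInput:input]) [session addInput:input];

AVCaptureStillImageOutput *stillOutput = [[AVCaptureStillImageOutput alloc] init];
stillOutput.outputSettings = @{AVVideoCodecKey: AVVideoCodecJPEG};
if ([session canAddOutput:stillOutput]) [session addOutput:stillOutput];
[session startRunning];

AVCaptureConnection *connection = [stillOutput connectionWithMediaType:AVMediaTypeVideo];
[stillOutput captureStillImageAsynchronouslyFromConnection:connection
                                          completionHandler:^(CMSampleBufferRef buffer, NSError *error) {
    if (buffer) {
        // JPEG bytes straight from the capture pipeline; no UIImage in between.
        NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:buffer];
        [jpegData writeToFile:imagePath atomically:YES]; // imagePath: as built in processImage:
    }
}];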

Issue using GLKTextureLoader and imageNamed: when loading same image multiple times

I get strange behaviour when I use the following code to load an image multiple times:
NSDictionary *options = @{GLKTextureLoaderOriginBottomLeft: @YES};
textureInfo = [GLKTextureLoader textureWithCGImage:[UIImage imageNamed:@"name"].CGImage
                                            options:options
                                              error:nil];
It works as expected when I load the image the first time, but when I try to load the same image again it's drawn upside down.
I think this has to do with the fact that it's actually the same CGImage being passed to the texture loader because of the use of imageNamed:. The flip transformation is therefore applied a second time to the same image.
Is there a way to get around this issue?
I guess you could flip the image, and load it the first time when your program starts.
Or not use imageNamed. Or keep the texture in memory so you only have to load it once.
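A hedged sketch combining the last two suggestions: load from the file (so the shared, system-cached CGImage behind imageNamed: is never touched) and keep the resulting GLKTextureInfo around so each image only goes through the loader once. The cache dictionary, method name, and path lookup are illustrative:
// Illustrative cache so each texture is loaded exactly once.
static NSMutableDictionary *textureCache;

- (GLKTextureInfo *)textureNamed:(NSString *)name
{
    if (!textureCache) textureCache = [NSMutableDictionary dictionary];
    GLKTextureInfo *cached = textureCache[name];
    if (cached) return cached;

    // Load from the file directly instead of the shared image behind imageNamed:.
    NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
    NSDictionary *options = @{GLKTextureLoaderOriginBottomLeft: @YES};
    GLKTextureInfo *info = [GLKTextureLoader textureWithContentsOfFile:path
                                                               options:options
                                                                 error:NULL];
    if (info) textureCache[name] = info;
    return info;
}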

Memory management for CGImageCreateWithImageInRect used for CAKeyframeAnimation

I'm using CAKeyframeAnimation to animate a few PNG images. I cut them from one PNG image using CGImageCreateWithImageInRect.
When I run the Analyze function in Xcode it shows me potential memory leaks. They are because of CGImageCreateWithImageInRect. But when I call CGImageRelease after the animation object is created, no images are shown (I know why, but when I don't release, there are leaks).
Could someone explain this memory situation to me? And what is the best solution? I was thinking about creating UIImages from the CGImageRefs for each "cut", but CAKeyframeAnimation takes an array of CGImageRefs, so I thought it was not necessary to create those UIImages.
UIImage *source = [UIImage imageNamed:@"my_anim_actions.png"];

cutRect = CGRectMake(0 * dimForImg.width, 0 * dimForImg.height, dimForImg.width, dimForImg.height);
CGImageRef image1 = CGImageCreateWithImageInRect([source CGImage], cutRect);
cutRect = CGRectMake(1 * dimForImg.width, 0 * dimForImg.height, dimForImg.width, dimForImg.height);
CGImageRef image2 = CGImageCreateWithImageInRect([source CGImage], cutRect);

NSArray *images = [[NSArray alloc] initWithObjects:(__bridge id)image1, (__bridge id)image2, (__bridge id)image2, (__bridge id)image1, (__bridge id)image2, (__bridge id)image1, nil];

CAKeyframeAnimation *myAnimation = [CAKeyframeAnimation animationWithKeyPath:@"contents"];
myAnimation.calculationMode = kCAAnimationDiscrete;
myAnimation.duration = kMyTime;
myAnimation.values = images; // NSArray of CGImageRefs
[myAnimation setValue:@"ANIMATION_MY" forKey:@"MyAnimation"];
myAnimation.removedOnCompletion = NO;
myAnimation.fillMode = kCAFillModeForwards;

CGImageRelease(image1);
CGImageRelease(image2); // YES or NO?
Two solutions:
First: Use [UIImageView animationImages] to animate between multiple images.
Second: Store the images as ivars and release them in dealloc of your class.
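A small sketch of the second suggestion: keep the CGImageRefs as ivars so they stay alive for the lifetime of the animation, and release them in dealloc. The class name, ivar names, and frame size are illustrative, not from the question:
@interface MyAnimatingView : UIView
@end

@implementation MyAnimatingView {
    CGImageRef _frame1;  // owned: created by CGImageCreateWithImageInRect
    CGImageRef _frame2;
}

- (void)setUpAnimationFrames
{
    CGSize dimForImg = CGSizeMake(64, 64); // per-frame size (value illustrative)
    UIImage *source = [UIImage imageNamed:@"my_anim_actions.png"];
    _frame1 = CGImageCreateWithImageInRect(source.CGImage,
                                           CGRectMake(0, 0, dimForImg.width, dimForImg.height));
    _frame2 = CGImageCreateWithImageInRect(source.CGImage,
                                           CGRectMake(dimForImg.width, 0, dimForImg.width, dimForImg.height));
    // ...build the CAKeyframeAnimation from _frame1/_frame2 as in the question...
}

- (void)dealloc
{
    // The ivars keep the images alive as long as the animation needs them;
    // release them only when the owning object goes away.
    CGImageRelease(_frame1);
    CGImageRelease(_frame2);
}
@end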
Not sure if this helps, but I ran into something similar where using CGImageCreateWithImageInRect to take sections of an image and then showing those in image views caused huge memory usage. One thing that improved memory usage a lot was encoding the sub-image into PNG data and re-decoding it back into an image again.
I know, that sounds ridiculous and completely redundant, right? But it helped a lot. According to the documentation of CGImageCreateWithImageInRect,
The resulting image retains a reference to the original image, which
means you may release the original image after calling this function.
It seems as though when the image is shown, the UI copies the image data, including the original image data that it references; that's why it uses so much memory. But when you write it to PNG and back again, you create independent image data that is smaller and does not depend on the original. This is just my guess.
Anyway, try it and see if it helps.
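A sketch of that PNG round-trip (the helper name is illustrative): the cropped CGImage is re-encoded and decoded so the result owns a compact, independent backing store instead of retaining the full source bitmap.
#import <UIKit/UIKit.h>

static UIImage *IndependentSubimage(UIImage *source, CGRect cutRect) {
    // Crop; the result still references the original image's data.
    CGImageRef cropped = CGImageCreateWithImageInRect(source.CGImage, cutRect);
    if (!cropped) return nil;
    // Round-trip through PNG to detach from the original backing store.
    NSData *pngData = UIImagePNGRepresentation([UIImage imageWithCGImage:cropped]);
    CGImageRelease(cropped);
    return [UIImage imageWithData:pngData];
}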