What is the advantage of imageNamed? - ios

I know that loading an image like this
UIImage *image = [UIImage imageNamed:@"img"];
will cache the image and loading it like this will not
UIImage *image = [UIImage imageWithContentsOfFile:@"img.png"];
People say it will be faster to access the cached image because iOS will read it from memory and we will not have the overhead of reading and decoding the file. OK, I see that, but suppose I use the second, non-cached method to load the image into a view that is a property, like this:
UIImage *image = [UIImage imageWithContentsOfFile:@"img.png"];
self.imageView = [[UIImageView alloc] initWithImage:image];
Isn't the image already in memory? If I want to access it, I simply do imageView.image and get it from memory.
I am probably tired, but I cannot imagine a single use for the cached version, or I am not understanding what this cache means.
Care to explain? Thanks.

Imagine that your image is some icon that you use in 7 different view controllers... You could either load the image once and then pass it to each VC or you could use imageNamed... Your choice.

From the documentation:
This method looks in the system caches for an image object with the specified name and returns that object if it exists. If a matching image object is not already in the cache, this method locates and loads the image data from disk or asset catalog, and then returns the resulting object. You cannot assume that this method is thread safe.
Let's say you have an image in your app. When you need to use the image, you use this code:
UIImage *image = [UIImage imageWithContentsOfFile:@"img.png"];
iOS will then look for your image in the app bundle, load it into memory, and then decode it into the UIImage.
However, say you need 10 different objects to use the image and you load it similar to this:
for (ClassThatNeedsImage *object in objects) {
    object.image = [UIImage imageWithContentsOfFile:@"img.png"];
}
(This isn't the best example since you could just load the image once and pass it to each of the objects. However, I have had more complex code where that is not an option.)
iOS will then look for the image 10 times, load it into memory 10 times, and then decode it 10 times. However, if you use imageNamed:
for (ClassThatNeedsImage *object in objects) {
    object.image = [UIImage imageNamed:@"img"];
}
From Wikipedia:
In computing, a cache is a component that transparently stores data so that future requests for that data can be served faster.
The cache used by UIImage is stored in memory, which is much faster to access than the disk.
The first time through the loop, iOS looks in the cache to see if the image is stored there. Assuming you haven't loaded this image with imageNamed previously, it doesn't find it, so it looks for the image, loads it into memory, decodes it, and then copies it into the cache.
On the other iterations, iOS looks in the cache, finds the image, and copies it into the UIImage object, so it doesn't have to do any hard disk access at all.
If you are only going to use the image once in the lifetime of your app, use imageWithContentsOfFile:. If you are going to use the image multiple times, use imageNamed:.
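To make the cache concrete, here is a minimal sketch of roughly what imageNamed: amounts to if you rolled it yourself with NSCache. The MyImageCache class name is hypothetical, and the real imageNamed: additionally evicts entries under memory pressure and handles asset catalogs, @2x/@3x variants, and so on.

// Hypothetical helper that mimics the idea behind imageNamed:
// check an in-memory cache first, fall back to disk, then remember the result.
@interface MyImageCache : NSObject
+ (UIImage *)imageNamed:(NSString *)name;
@end

@implementation MyImageCache

+ (NSCache *)cache {
    static NSCache *cache = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        cache = [[NSCache alloc] init];
    });
    return cache;
}

+ (UIImage *)imageNamed:(NSString *)name {
    UIImage *image = [[self cache] objectForKey:name];
    if (image) {
        return image; // served from memory: no disk read, no decode
    }
    NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
    image = [UIImage imageWithContentsOfFile:path]; // disk read + decode
    if (image) {
        [[self cache] setObject:image forKey:name]; // remember for next time
    }
    return image;
}

@end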

Related

How to remove UIImageView's last image?

I have a UIImageView that changes pictures depending on a level. But every time I change it, the last image is retained and the memory usage keeps going up. I think the imageView is stacking the images on top of one another. I've tried setting self.imageView.image = nil before setting the new image, but that doesn't seem to work. How do I correctly do this?
if (level == 1) {
    self.imageView.image = [UIImage imageNamed:@"1"];
} else if (level == 2) {
    self.imageView.image = [UIImage imageNamed:@"2"];
} else if (level == 3) {
    self.imageView.image = [UIImage imageNamed:@"3"];
}
If you have an image file that will only be displayed once and wish to ensure that it does not get added to the system’s cache, you should instead create your image using imageWithContentsOfFile:. This will keep your single-use image out of the system image cache, potentially improving the memory use characteristics of your app.
Read here: UIImage Documentation
In conclusion, imageNamed adds the image to the cache and might eventually increase memory use. Instead, use imageWithContentsOfFile so the image won't be added to the cache:
UIImage *image = [UIImage imageWithContentsOfFile:imagePath];
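Note that imageWithContentsOfFile: needs a full filesystem path, not a bare name like @"1". A minimal sketch, assuming the level images ship as PNGs in the main bundle:

// Build an absolute path to the bundled image; a bare name will not resolve on its own.
NSString *imagePath = [[NSBundle mainBundle] pathForResource:@"1" ofType:@"png"];
self.imageView.image = [UIImage imageWithContentsOfFile:imagePath];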
The other answers already say that the image is cached by the system and therefore its memory is not freed. But that is nothing for you to worry about at this point.
If the system decides it needs more memory, it will clear its caches, effectively freeing the image's memory and reusing it for something else.
Don't worry about the memory usage unless you see serious problems. And if that is the case, imageNamed is probably the least of your worries.
General note to consider:
Whether or not to make use of the caching depends entirely on your use case. If you show the same image quite often and load it in a few different places, make use of the caching. If you show an image just once, use imageWithContentsOfFile.
The issue you are running into is that [UIImage imageNamed:] caches the decoded image data in memory. Load the image with imageWithContentsOfFile: instead, and only the active image will be held in memory.
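If you do end up keeping a cache of your own (an NSCache or a dictionary) alongside the system one, a minimal sketch of emptying it under memory pressure; the levelImageCache property is hypothetical:

// In a view controller that keeps its own image cache, release it when the
// system reports memory pressure. The imageNamed: cache is purged by the
// system itself; this only matters for caches you manage yourself.
- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    [self.levelImageCache removeAllObjects]; // hypothetical NSCache property
}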

Why the UIImage is not released by ARC when I used UIGraphicsGetImageFromCurrentImageContext inside of a block

I am trying to download an image from the server using NSURLSessionDownloadTask (an iOS 7 API), and inside the completion block I want the original image to be resized and stored locally. So I wrote a helper method to create a bitmap context, draw the image, and then get the new image from UIGraphicsGetImageFromCurrentImageContext(). The problem is that the image is never released each time I do this. However, if I skip the context and image drawing, things work fine and there is no memory growth. There is no CGImageCreate/Release function being called, so there is really nothing to release manually, and adding @autoreleasepool fixes nothing here. Is there any way to fix this? I really want to modify the original image after downloading it and before storing it.
Here is some snippets for the issue:
[self fetchImageByDownloadTaskWithURL:url completion:^(UIImage *image, NSError *error) {
    UIImage *modifiedImage = [image resizedImageScaleAspectFitToSize:imageView.frame.size];
    // save to local disk
    // ...
}];

// This is the resize method in the UIImage category
- (UIImage *)resizedImageScaleAspectFitToSize:(CGSize)size
{
    CGSize imageSize = [self scaledSizeForAspectFitToSize:size];
    UIGraphicsBeginImageContextWithOptions(imageSize, YES, 0.0);
    CGRect imageRect = CGRectMake(0.0, 0.0, imageSize.width, imageSize.height);
    [self drawInRect:imageRect]; // nothing changes if I make this weakSelf
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Update:
When I dig in with the Allocations instrument, I find that the memory growth is related to "VM: CG raster data". In my storing method I use NSCache as a photo memory cache before storing the image persistently, and the raster data eats a lot of memory if I use the memory cache. It seems that after the rendered image is cached, all of its drawing data also stays alive in memory until I release all the cached images. If I don't memory-cache the image, then none of the raster data coming from my image category method stays alive in memory. I just cannot figure out why the drawing data is not released after the image is cached. Shouldn't it be released after drawing?
New update:
I still haven't figured out why the raster data is not released while the image used for drawing is alive, and there is certainly no Analyze warning about it. So I guess I just have to avoid caching the huge image that is drawn to fit the large size, and remove cached drawing images when I no longer need them. If I call [UIImage imageNamed:] and draw with it, it seems to never be released, together with its raster data, since the image is system-cached; so I call [UIImage imageWithContentsOfFile:] instead, and eventually the memory behaves well. The other memory growth shows up as "non-object" allocations in the Allocations instrument, which I don't understand yet. Simulating a memory warning does release the system-cached image created by [UIImage imageNamed:]. As for the raster data, I will run some more tests tomorrow.
Try making your category method a class method instead. Perhaps the leak is the original CGImage data which you are overwriting when you call [self drawInRect:imageRect];.
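Separately, if you keep the rendered images in an NSCache as described in the update, you can at least put a ceiling on how much decoded (raster) data stays alive. The names and the cost arithmetic below are assumptions for illustration, reusing the question's resize category method:

// Bound the in-memory photo cache so decoded bitmap data cannot grow without limit.
NSCache *photoCache = [[NSCache alloc] init];
photoCache.countLimit = 20;                    // at most 20 decoded images
photoCache.totalCostLimit = 50 * 1024 * 1024;  // roughly 50 MB of raster data

UIImage *resized = [downloadedImage resizedImageScaleAspectFitToSize:targetSize]; // assumed inputs
NSUInteger cost = (NSUInteger)(resized.size.width * resized.scale *
                               resized.size.height * resized.scale * 4); // ~4 bytes per pixel
[photoCache setObject:resized forKey:cacheKey cost:cost]; // cacheKey is an assumed key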

Reducing memory usage with UIImagePickerController

In my app, the user can take multiple images using the UIImagePickerController, and those images are then displayed one by one in the view.
I've been having some trouble with memory management. With cameras on today's phones quickly rising in megapixels, UIImages returned from UIImagePickerController are memory hogs. On my iPhone 4S, the UIImages are around 5MB; I can hardly imagine what they're like on the newer and future models.
A friend of mine said that the best way to handle UIImages was to immediately save them to a JPEG file in my app's document directory and to release the original UIImage as soon as possible. So this is what I've been trying to do. Unfortunately, even after saving the UIImage to a JPEG and leaving no references to it in my code, it is not being garbage collected.
Here are the relevant sections of my code. I am using ARC.
// Entry point: UIImagePickerController delegate method
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    // Process the image. The method returns a pathname.
    NSString *path = [self processImage:[info objectForKey:UIImagePickerControllerOriginalImage]];
    // Add the image to the view
    [self addImage:path];
}

- (NSString *)processImage:(UIImage *)image {
    // Get a file path
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *filename = [self makeImageFilename]; // implementation omitted
    NSString *imagePath = [documentsDirectory stringByAppendingPathComponent:filename];
    // Get the image data (blocking; around 1 second)
    NSData *imageData = UIImageJPEGRepresentation(image, 0.1);
    // Write the data to a file
    [imageData writeToFile:imagePath atomically:YES];
    // Upload the image (non-blocking)
    [self uploadImage:imageData withFilename:filename];
    return imagePath;
}

- (void)uploadImage:(NSData *)imageData withFilename:(NSString *)filename {
    // This sends the upload job (implementation omitted) to a thread
    // pool, which in this case is managed by PhoneGap
    [self.commandDelegate runInBackground:^{
        [self doUploadImage:imageData withFilename:filename];
    }];
}

- (void)addImage:(NSString *)path {
    // Implementation omitted: make a UIImageView (set bounds, etc). Save it
    // in the variable iv.
    iv.image = [UIImage imageWithContentsOfFile:path];
    [iv setNeedsDisplay];
    NSLog(@"Displaying image named %@", path);
    self.imageCount++;
}
Notice how the processImage method takes a reference to a UIImage, but it uses it for only one thing: making the NSData* representation of that image. So, after the processImage method is complete, the UIImage should be released from memory, right?
What can I do to reduce the memory usage of my app?
Update
I now realize that a screenshot of the allocations profiler would be helpful for explaining this question.
Your processImage method is not your problem.
We can test your image-saving code by transplanting it into Apple's PhotoPicker demo app.
Conveniently, Apple's sample project is very similar to yours, with a method to take repeated pictures on a timer. In the sample, the images are not saved to the filesystem, but accumulated in memory. It comes with this warning:
/*
Start the timer to take a photo every 1.5 seconds.
CAUTION: for the purpose of this sample, we will continue to take pictures indefinitely.
Be aware we will run out of memory quickly. You must decide the proper threshold number of photos allowed to take from the camera.
One solution to avoid memory constraints is to save each taken photo to disk rather than keeping all of them in memory.
In low memory situations sometimes our "didReceiveMemoryWarning" method will be called in which case we can recover some memory and keep the app running.
*/
With your method added to Apple's code, we can address this issue.
The imagePicker delegate method looks like this:
- (void)imagePickerController:(UIImagePickerController *)picker
        didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info valueForKey:UIImagePickerControllerOriginalImage];
    [self.capturedImages removeAllObjects]; // (1)
    [self.imagePaths addObject:[self processImage:image]]; // (2)
    [self.capturedImages addObject:image];
    if ([self.cameraTimer isValid])
    {
        return;
    }
    [self finishAndUpdate]; // (3)
}
(1) - our addition, to flush the live memory on each image capture event
(2) - our addition, to save image to filesystem and build a list of filesystem paths.
(3) - for our tests we are using the cameraTimer to take repeat images, so finishAndUpdate does not get called.
I have used your processImage: method as is, with the line:
[self uploadImage:imageData withFilename:filename];
commented out.
I have also added a small makeImageFilename method:
static int imageName = 0;

- (NSString *)makeImageFilename {
    imageName++;
    return [NSString stringWithFormat:@"%d.jpg", imageName];
}
These are the only additions I have made to Apple's code.
Here is the memory footprint of Apple's original code (cameraTimer run without (1) and (2)): memory climbed to ~140MB after the capture of ~40 images.
Here is the memory footprint with the additions (cameraTimer run with (1) and (2)): the file-saving method fixes the memory issue; memory stays flat, with spikes of ~30MB per image capture.
These tests were run on an iPhone 5S. Uncompressed images are 3264 x 2448 px, which should be around 24 MB (24-bit RGB). The JPEG-compressed (filesystem) size ranges from ~250 kB (0.1 quality, as per your code) to 1-2 MB (0.7 quality), up to ~6 MB (1.0 quality).
In a comment to your question, you suggest that a re-loaded image will benefit from that compression. This is not the case: when an image is loaded into memory it must first be uncompressed. Its memory footprint will be approximately pixels x colours x bit-depth per colour, regardless of the way the image is stored on disk. As jrturton has pointed out, this at least suggests that you should avoid loading an image at greater resolution than you need for display. Say you have a full-screen (retina) imageView of 832 x 640: you are wasting memory loading an image larger than that if your user cannot zoom in. That's a live memory footprint of ~1.6 MB, a huge improvement on your 24 MB original (but this is a digression from your main issue).
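One way to load at no more than the size you need (a sketch, not part of the question's code) is ImageIO's thumbnail API, which decodes straight to a bounded pixel size instead of decoding the full-resolution image first; the function name here is made up:

#import <ImageIO/ImageIO.h>

// Decode an image from disk at a bounded pixel size; maxPixelSize is the
// longest edge you actually need for display.
static UIImage *DownsampledImageAtURL(NSURL *fileURL, CGFloat maxPixelSize) {
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)fileURL, NULL);
    if (!source) return nil;
    NSDictionary *options = @{
        (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
        (id)kCGImageSourceCreateThumbnailWithTransform : @YES,
        (id)kCGImageSourceThumbnailMaxPixelSize : @(maxPixelSize)
    };
    CGImageRef cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
    CFRelease(source);
    if (!cgImage) return nil;
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return result;
}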
As processImage doesn't seem to be the cause of your memory trouble, you should look at other possibilities:
1/ You don't have a memory issue. How are you profiling the app?
2/ One of addImage or uploadImage is retaining memory. Try commenting each out in turn to identify which.
3/ The problem is elsewhere (something managed by PhoneGap?)
As regards those memory spikes, these are caused by the image-to-data jpeg compression line:
NSData* imageData = UIImageJPEGRepresentation(image, 0.1);
Under the hood, that is ImageIO, and it is probably unavoidable when using UIImagePickerController. See here: Most memory efficient way to save a photo to disk on iPhone? If you switch to AVFoundation you can get at the image as unconverted NSData, so you could avoid the spike.

UICollectionView bad performance with UIImageViews with Core Image-manipulated images

I've got a UICollectionView in my app whose cells mainly consist of UIImageViews containing images that have been manipulated with Core Image to have less color saturation. Performance is absolutely horrible when scrolling. When I profile it, the huge majority of time spent (80% or so) is not in my code or even in my stack. It all appears to be in Core Animation code. Might anyone know why this could be?
In my UICollectionViewCell subclass I have something like this:
UIImage *img = [self obtainImageForCell];
img = [img applySaturation:0.5];
self.imageView.image = img;
applySaturation looks like this:
CIImage *image = [CIImage imageWithCGImage:self.CGImage];
CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
[filter setValue:image forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:saturation] forKey:@"inputSaturation"];
return [UIImage imageWithCIImage:filter.outputImage];
My only guess is that Core Animation doesn't play well with Core Image. The Apple docs say this about CIImage:
Although a CIImage object has image data associated with it, it is not an image. You can think of a CIImage object as an image “recipe.” A CIImage object has all the information necessary to produce an image, but Core Image doesn’t actually render an image until it is told to do so. This “lazy evaluation” method allows Core Image to operate as efficiently as possible.
Doing this evaluation at the last minute while animating might be tricky.
I had the exact same problem, cured by avoiding the triggering of Core Image filters during cell updates.
The Apple docs stuff about lazy evaluation / recipes is, I think, more directed at the idea that you can chain core image filters together very efficiently. However, when you want to display the results of a core image filter chain, the thing needs to be evaluated then and there, which is not a good situation if the 'then and there' is during a rapidly-scrolling view and the filter in question requires heavy processing (many of them do).
You can try fiddling with GPU vs CPU processing, but I have found that the overhead of moving image data to and from CIImage can be a more significant overhead (see my answer here)
My recommendation is to treat this kind of processing the same way as you would treat the populating of a scrolling view with online images - i.e. process asynchronously, use placeholders (eg the preprocessed image), and cache results for reuse.
Update:
In reply to your comment:
Applicable filters are applied when you extract data from the CIImage - for example, with imageWithCIImage: [warning - this is my inference, I have not tested].
But this is not your problem... you need to process your images on a background thread, as the processing will take time that holds up scrolling. Meanwhile, display something else in the scrolling cell, such as a flat color or, better, the UIImage you are feeding into your CIImage for filtering. Update the cell when the processing is done (check whether it still needs updating; it may have scrolled offscreen by then). Save the filtered image in some kind of persistent store so that you don't need to filter it a second time, and check the cache whenever you need to display the image again before reprocessing from scratch.
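A rough sketch of that pattern in a cell-configuration method; the cache, queue, cell class, and lookup helper below are assumptions rather than the asker's code, and the filter call should be a version that actually renders (as in the next answer):

// Show the unfiltered image immediately as a placeholder, filter on a
// background queue, cache the result, and update the cell only if it is
// still on screen for the same index path.
- (void)configureCell:(MyImageCell *)cell atIndexPath:(NSIndexPath *)indexPath {
    UIImage *original = [self imageForIndexPath:indexPath];          // hypothetical lookup
    NSString *cacheKey = [NSString stringWithFormat:@"desat-%ld", (long)indexPath.item];

    UIImage *cached = [self.filteredImageCache objectForKey:cacheKey];
    if (cached) {
        cell.imageView.image = cached;
        return;
    }

    cell.imageView.image = original; // placeholder while the filter runs

    dispatch_async(self.filterQueue, ^{
        UIImage *filtered = [original applySaturation:0.5];          // the expensive Core Image work
        [self.filteredImageCache setObject:filtered forKey:cacheKey];
        dispatch_async(dispatch_get_main_queue(), ^{
            MyImageCell *visibleCell = (MyImageCell *)[self.collectionView cellForItemAtIndexPath:indexPath];
            visibleCell.imageView.image = filtered; // nil (no-op) if the cell scrolled offscreen
        });
    });
}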
I also had this exact problem – right down to wanting to desaturate an image! – and filtering once and caching the result (even as a UIImage) didn't help.
The problem, as others have mentioned, is that a CIImage encapsulates the information required to generate an image, but isn't actually an image itself. So when scrolling, the on-screen image needs to be generated on the fly, which kills performance. The same turns out to be true of a UIImage created using the imageWithCIImage:scale:orientation: method, so creating this once and reusing it also doesn't help.
The solution is to force CoreGraphics to actually render the image before saving it as a UIImage. This gave a huge improvement in scrolling performance for me. In your case, the applySaturation method might look like this:
CIImage *image = [CIImage imageWithCGImage:self.CGImage];
CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
[filter setValue:image forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:saturation] forKey:@"inputSaturation"];
CGImageRef cgImage = [[CIContext contextWithOptions:nil] createCGImage:filter.outputImage fromRect:filter.outputImage.extent];
UIImage *result = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
return result;
You might also consider caching the CIFilter and/or the CIContext if you're going to be using this method a lot, since these can be expensive to create.
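For example, a minimal sketch of reusing one CIContext instead of creating a fresh one on every call (an addition on top of the code above, not part of the original answer):

// Create the CIContext once and reuse it across calls; context creation is
// comparatively expensive.
static CIContext *SharedFilterContext(void) {
    static CIContext *context = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        context = [CIContext contextWithOptions:nil];
    });
    return context;
}

// Then, in applySaturation:, replace [CIContext contextWithOptions:nil]
// with SharedFilterContext().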

resize image data on iPhone

I get image data from NSURLConnection and store the data using NSMutableData. The point is that I want to store a smaller version of the image.
Now I use the following code to get a smaller image:
UIImage *temp = [[UIImage alloc] initWithData: imageData];
image = [temp resizedImage:CGSizeMake(72, 72) interpolationQuality:kCGInterpolationHigh];
smallImageData = UIImagePNGRepresentation(image);
My question is: can I get smallImageData directly from imageData?
I ask because my current method will be costly if I have a number of connections fetching image data.
This is a correct method to downsize images. If this is in a tight loop, you can wrap it in an autorelease pool to make sure any temporary objects are released.
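For example, a sketch along those lines, reusing the question's category method; the data array and storage call are assumptions:

// Resize a batch of downloaded images, draining temporaries after each pass
// so that peak memory stays close to one full-size image at a time.
for (NSData *imageData in downloadedImageDataArray) {   // hypothetical array of NSData
    @autoreleasepool {
        UIImage *temp = [[UIImage alloc] initWithData:imageData];
        UIImage *small = [temp resizedImage:CGSizeMake(72, 72)
                       interpolationQuality:kCGInterpolationHigh];
        NSData *smallImageData = UIImagePNGRepresentation(small);
        [self storeSmallImageData:smallImageData];      // hypothetical storage method
    }
}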
If you are running into issues with memory then serialize your requests to run one at a time.
