CALayer memory management (RAM) - iOS

I have more than 60 UIImageViews, and to each of them I apply a CALayer in the following way:
image1.layer.cornerRadius = 6.0;
image1.layer.masksToBounds = YES;
How much memory does the CALayer use? I only do this to keep the image's subviews clipped inside the image (they are not visible outside the image, whereas they are visible without the layer settings). Is it better to remove it and use different code instead? If so, which?

Let the system worry about memory management. 60 views is not so many (6000 might be). Each UIImageView is backed by a UIImage and a CGImageRef, and the system can purge the CGImageRef as needed to make space since it can reload it using cached info in the UIImage.
If you want to profile your memory usage, use the ObjectAlloc (Allocations) instrument in Instruments; even if you just use all the Instruments defaults you will still get a lot of useful information.

Related

Draw elements with Core Graphics or provide images?

I am not quite sure whether it is beneficial to draw the visual elements of my app with Core Graphics instead of providing the images. In terms of memory preservation and runtime speed, which way is better?
+[UIImage imageNamed:] is most efficient. It caches images, i.e. only one copy of an image is kept in memory; the image is decoded (from its PNG, JPEG, TIFF, etc. data) when it is needed and kept around for future reuse. If you are worried about memory use, iOS will purge the UIImage cache if you are running low or go into the background.
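For example (a quick sketch; the file names are placeholders), the cached and uncached loading styles look like this:
// Cached: decoded once, kept around, purged by iOS under memory pressure.
UIImage *cached = [UIImage imageNamed:@"Background.png"];

// Uncached: a fresh copy every call; useful for large, one-off images.
NSString *path = [[NSBundle mainBundle] pathForResource:@"Background" ofType:@"png"];
UIImage *uncached = [UIImage imageWithContentsOfFile:path];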
Using Core Graphics to draw an image does not do any caching for you, unless you write the code to draw your image into a context, save the context as a bitmap, cache the bitmap and then reuse it later on. So you end up drawing the same thing over and over every time it is needed. For example, if you override UIView's -drawRect: to draw imagery, then during animations it will be called for every single frame (60 times a second). This needlessly burns CPU cycles and battery life.
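If you do draw with Core Graphics, the caching described above is easy to add yourself: render once into a UIImage and reuse that bitmap, instead of redrawing in -drawRect: every frame. A rough sketch, where the size, the ellipse drawing, and imageView are stand-ins for your own drawing and view:
CGSize size = CGSizeMake(200.0, 200.0); // placeholder size

// Render once, off the -drawRect: path, and keep the result.
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0); // 0.0 = use the screen's scale
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
CGContextFillEllipseInRect(ctx, CGRectMake(0.0, 0.0, size.width, size.height));
UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Reuse the cached bitmap; no drawing happens during animations.
imageView.image = rendered;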
Bottom line is it depends on what your app is and does.
If you don't need your images to change or animate much, then you should just use an image directly. Don't worry so much about performance unless you have something like 100 images in a single view controller.
If the iPhone can handle games like Need for Speed, running an app with various images is an easy task.
Hope this helps.

What is the most memory-efficient way of downscaling images on iOS?

In a background thread, my application needs to read images from disk, downscale them to the size of the screen (1024x768 or 2048x1536), and save them back to disk. The original images are mostly from the Camera Roll, but some of them may be larger (e.g. 3000x3000).
Later, in a different thread, these images will frequently get downscaled to different sizes around 500x500 and saved to the disk again.
This leads me to wonder: what is the most efficient way to do this in iOS, performance and memory-wise? I have used two different APIs:
using CGImageSource and CGImageSourceCreateThumbnailAtIndex from ImageIO;
drawing to CGBitmapContext and saving results to disk with CGImageDestination.
Both worked for me but I'm wondering if they have any difference in performance and memory usage. And if there are better options, of course.
While I can't definitely say it will help, I think it's worth trying to push the work to the GPU. You can either do that yourself by rendering a textured quad at a given size, or by using GPUImage and its resizing capabilities. While it has some texture size limitations on older devices, it should have much better performance than a CPU-based solution.
With libjpeg-turbo you can use the scale_num and scale_denom fields of jpeg_decompress_struct, and it will decode only the needed blocks of an image. It gave me 250 ms decoding+scaling time in a background thread on a 4S, with a 3264x2448 original image (from the camera, image data placed in memory) scaled to the iPhone's display resolution. I guess that's OK for an image that large, but still not great.
(And yes, it is memory efficient: you can decode and store the image almost line by line.)
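For reference, here is a rough C sketch of that libjpeg-turbo approach (the file path and the 1/4 ratio are just placeholders, and error handling is omitted):
#include <stdio.h>
#include <stdlib.h>
#include <jpeglib.h>

// Decode the JPEG at `path` at 1/4 of its original dimensions, one scanline at a time.
static void decode_scaled(const char *path)
{
    struct jpeg_decompress_struct cinfo;
    struct jpeg_error_mgr jerr;

    FILE *f = fopen(path, "rb");
    if (f == NULL) return;

    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, f);
    jpeg_read_header(&cinfo, TRUE);

    // Only the DCT blocks needed for a 1/4-size image are fully decoded,
    // which is what keeps CPU time and memory down.
    cinfo.scale_num = 1;
    cinfo.scale_denom = 4;
    jpeg_start_decompress(&cinfo);

    size_t row_bytes = cinfo.output_width * cinfo.output_components;
    JSAMPLE *row = malloc(row_bytes);
    while (cinfo.output_scanline < cinfo.output_height) {
        JSAMPROW rows[1] = { row };
        jpeg_read_scanlines(&cinfo, rows, 1);
        // ...copy `row` into your destination bitmap here...
    }

    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    free(row);
    fclose(f);
}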
What you said on Twitter does not match your question.
If you are having memory spikes, look at Instruments to figure out what is consuming the memory. Just the data alone for your high resolution image is 10 megs, and your resulting images are going to be about 750k, if they contain no alpha channel.
The first issue is keeping the memory usage low. For that, make sure that all of the images you load are disposed of as soon as you are done using them; that ensures the underlying native (C/Objective-C) memory is released immediately, instead of waiting for the GC to collect the managed wrappers. Something like:
using (var img = UIImage.FromFile ("...")){
    using (var scaled = Scaler (img)){
        scaled.Save (...);
    }
}
As for the scaling, there are a variety of ways of scaling the images. The simplest way is to create a context, then draw on it, and then get the image out of the context. This is how MonoTouch's UIImage.Scale method is implemented:
public UIImage Scale (SizeF newSize)
{
    UIGraphics.BeginImageContext (newSize);
    Draw (new RectangleF (0, 0, newSize.Width, newSize.Height));
    var scaledImage = UIGraphics.GetImageFromCurrentImageContext ();
    UIGraphics.EndImageContext ();
    return scaledImage;
}
The performance will be governed by the context features that you enable. For example, a higher-quality scaling would require changing the interpolation quality:
context.InterpolationQuality = CGInterpolationQuality.High
The other option is to run your scaling not on the CPU, but on the GPU. To do that, you would use the CoreImage API and use the CIAffineTransform filter.
As to which one is faster, that is something left for someone else to benchmark.
CGImage Scale (string file)
{
    // CIImage.FromCGImage expects a CGImage, so unwrap the UIImage first.
    var ciimage = CIImage.FromCGImage (UIImage.FromFile (file).CGImage);

    // Create an AffineTransform that scales the image to half its size
    var transform = CGAffineTransform.MakeScale (0.5f, 0.5f);
    var affineTransform = new CIAffineTransform () {
        Image = ciimage,
        Transform = transform
    };
    var output = affineTransform.OutputImage;
    var context = CIContext.FromOptions (null);
    return context.CreateCGImage (output, output.Extent);
}
If either of the two is more efficient, it'll be the former.
When you create a CGImageSource you create just what the name says — some sort of opaque thing from which an image can be obtained. In your case it'll be a reference to a thing on disk. When you ask ImageIO to create a thumbnail you explicitly tell it "do as much as you need to output this many pixels".
Conversely if you draw to a CGBitmapContext then at some point you explicitly bring the whole image into memory.
So the second approach definitely has the whole image in memory at once at some point, whereas the former needn't (in practice there'll no doubt be some sort of guesswork within ImageIO as to the best way to proceed). So across all possible implementations of the OS, either the former will be advantageous or there'll be no difference between the two.
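For completeness, the ImageIO route described here looks roughly like this (a sketch; the wrapper function is just an illustration, and kCGImageSourceCreateThumbnailFromImageAlways forces a true downscale rather than reusing any embedded thumbnail):
#import <ImageIO/ImageIO.h>

CGImageRef CreateThumbnail(NSURL *url, CGFloat maxPixelSize) {
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)url, NULL);
    if (source == NULL) return NULL;

    NSDictionary *options = @{
        (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
        (id)kCGImageSourceCreateThumbnailWithTransform : @YES, // respect EXIF orientation
        (id)kCGImageSourceThumbnailMaxPixelSize : @(maxPixelSize)
    };

    CGImageRef thumb = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
    CFRelease(source);
    return thumb; // caller is responsible for CGImageRelease()
}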
I would try using a C-based library like Leptonica. I'm not sure whether iOS optimizes Core Graphics with the relatively new Accelerate framework, but Core Graphics probably has more overhead involved just to resize an image. Finally, if you want to roll your own implementation, try using vImageScale_??format?? backed by some memory-mapped files; I can't see anything being faster (a sketch follows below, after the link).
http://developer.apple.com/library/ios/#documentation/Performance/Conceptual/vImage/Introduction/Introduction.html
PS. Also make sure to check the compiler optimization flags.
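A bare-bones sketch of the vImage call, assuming you already have the source and destination pixels in 8-bit ARGB buffers (srcPixels, dstPixels, and the dimensions are placeholders you would fill in yourself):
#import <Accelerate/Accelerate.h>

// src and dst describe pre-allocated 8-bit-per-channel ARGB pixel buffers.
// vImage_Buffer fields are: data, height, width, rowBytes.
vImage_Buffer src = { srcPixels, srcHeight, srcWidth, srcWidth * 4 };
vImage_Buffer dst = { dstPixels, dstHeight, dstWidth, dstWidth * 4 };

vImage_Error err = vImageScale_ARGB8888(&src, &dst, NULL, kvImageHighQualityResampling);
if (err != kvImageNoError) {
    NSLog(@"vImageScale_ARGB8888 failed: %ld", (long)err);
}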
If you want to save memory, you can read the source image tile by tile, downscale each tile, and write it to the destination image.
There is a sample project from Apple that implements exactly this approach:
https://developer.apple.com/library/ios/samplecode/LargeImageDownsizing/Introduction/Intro.html
You can download the project and run it. It uses MRC (manual reference counting), so you can use it very smoothly.
Hope it helps. :)

Is there any performance or memory overhead using UIImage or UILabel or NSString in Quartz2D

Is there any performance or memory overhead to using UIImage, UILabel, or NSString with Quartz 2D?
If there is no difference, then why not use UIImage, UILabel, etc.?
Can anyone send me a snippet showing how to draw an image without using UIImage?
Thanks in advance,
Regards.
Please correct me if you see any stupid mistake; I am new to this and trying to learn.
A label draws a string. You can't have a label without a string; if you did, what would it draw?
Last I checked, UILabel uses UIWebView internally, so you could indeed make a more efficient version. One way would be to use Core Text; the other would be to use a CATextLayer.
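A CATextLayer version might look roughly like this (a quick sketch; the frame, font size, and self.view are placeholders for your own values):
#import <QuartzCore/QuartzCore.h>

CATextLayer *textLayer = [CATextLayer layer];
textLayer.string = @"Hello, Quartz";
textLayer.fontSize = 14.0;
textLayer.foregroundColor = [UIColor blackColor].CGColor;
textLayer.contentsScale = [UIScreen mainScreen].scale; // crisp text on Retina displays
textLayer.frame = CGRectMake(10.0, 10.0, 200.0, 20.0);
[self.view.layer addSublayer:textLayer];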
As for UIImage, technically yes; a UIImage wraps a CGImage, so cutting out the UIImage would save some memory. However, 99% of the memory used by an image is for the image itself, its pixels; those are contained within the CGImage, and the UIImage is tiny compared to it. You have better things to spend your time cutting.
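To answer the snippet request directly, here is a sketch of drawing an image with no UIImage at all, straight from a CGImage inside -drawRect: (the file path is a placeholder, and in real code you would create the CGImage once rather than on every draw):
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // Load the image without going through UIImage.
    CGDataProviderRef provider = CGDataProviderCreateWithFilename("/path/to/image.png");
    CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);

    // Flip the context so the image is not drawn upside-down
    // (Core Graphics' coordinate system is inverted relative to UIKit's).
    CGContextSaveGState(ctx);
    CGContextTranslateCTM(ctx, 0.0, self.bounds.size.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);
    CGContextDrawImage(ctx, self.bounds, image);
    CGContextRestoreGState(ctx);

    CGImageRelease(image);
}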
Rather than guessing and/or relying on generalities for your optimizations, use Instruments to find out exactly what your application is spending its memory on. Once you know with hard evidence where all your memory is going, you'll know where you can look for savings.
Wrappers generally won't increase memory usage much; objects are small, so you'll only pay a lot for them if you create a lot of them. Look instead to shortening their lifetimes; don't hold onto objects (in caches, collection objects, or directly in instance variables/properties) any longer than you need to.

UIImageView not releasing image data properly?

In its simplest form, my app displays 10 UIImageViews, each containing an image. Even with all UIImageViews containing images, my app uses a small enough memory footprint. However, there is a button to clear all the UIImageViews by setting all their images to nil. The problem is, when checking Memory Monitor in Instruments, the memory held by the UIImageViews is NOT going away. This doesn't appear in the Allocations instrument, confirming the remaining memory footprint is not an object, but instead graphics-based memory. If I resize the images to something smaller or larger, the memory remaining is also smaller or larger, respectively.
Why is the image data sticking around after the UIImageView's image has been set to nil?
I believe UIKit keeps a cache of images for reuse. UIImageView might be releasing the object, but a copy is kept around for performance reasons.
These images, though, should be released on receiving a memory warning. If they're not, there's two places I'd check:
Make sure the UIImageView is being dealloc'd. Use the Allocations instrument to profile your app and do whatever you need to do in the program to load those images. Then unload the images and do a search for UIImageView. As long as you're sure your program should have released all of them, if you find any in the search you know something is wrong.
I'd also check any place the image was created, for example: UIImage *image = [UIImage imageNamed:@"Foo.jpg"]; Make sure these are also being released. You can use the Allocations instrument to find UIImage instances, but it'll be harder to weed out the ones that should/should not be there.
Run the static analyzer: in Xcode 4 it's under Product -> Analyze. This is a great tool for finding logic errors, over/under-release (if you're not using ARC), etc.
Until actual UIImageViews are themselves released, their memory will remain allocated. Additionally, if you're using convenience methods on UIImage to obtain your images, eg:
UIImage *myImage = [UIImage imageNamed:@"myImage"];
Note that your image may be cached behind the scenes by iOS, and so even if the image has been released by you, the memory footprint may still reflect the presence of the image in memory (eventually iOS will release it, so this shouldn't adversely impact your resource consumption).

UIImage implementation / paging?

From the UIImage documentation:
In low-memory situations, image data may be purged from a UIImage object to free up memory on the system. This purging behavior affects only the image data stored internally by the UIImage object and not the object itself. When you attempt to draw an image whose data has been purged, the image object automatically reloads the data from its original file.
What about UIImages that were not loaded from a file, but instead drawn, say, with UIGraphicsGetImageFromCurrentContext()?
I'm trying to come up with ways to optimize the memory usage of UITableViewCells with UIImageViews containing UIImages as the cells enter and are pulled from the reuse queue.
Thoughts?
Mike,
My understanding is that the CGImage data is simply gone in that case, so for your custom-drawn images I think you are out of luck: there is no original file for the system to reload from.
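One workaround to consider (a sketch, not behaviour the documentation promises for context-drawn images): write the rendered image to disk yourself, let go of the in-memory UIImage when you don't need it, and recreate it from that file later. The cache path and file name here are just illustrative:
UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Persist the rendered bitmap so the UIImage can be released and re-created later.
NSString *cachesDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *path = [cachesDir stringByAppendingPathComponent:@"cell-thumbnail.png"];
[UIImagePNGRepresentation(rendered) writeToFile:path atomically:YES];

// Later, e.g. when the cell comes back out of the reuse queue:
UIImage *reloaded = [UIImage imageWithContentsOfFile:path];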
I actually just dealt with a similar issue with UITableViews. One thing that I did for performance was to create cells with a Nib; this was the single largest boost in performance of all the things I did, so if you are not using a Nib consider it.
You might also consider some form of preloading if you have that much data. I don't know what you are trying to implement, so this may or may be applicable.
One other note: reloading UIImages from their files after they have been purged is a significant hit, so if you are at that point you really need to just look at your memory usage overall.
