Best practice for animated image in iOS - UIImage or UIImageView? - ios

Sorry if the question is a bit subjective, but I couldn't find anything about the topic.
The question is simple:
Which of the following alternatives is "best" (i.e. gives the best performance)? I want to show the image in a UIImageView regardless of the chosen solution.
self.imageView.image = [UIImage animatedImageNamed:@"imagename-" duration:2.0f];
or
self.imageView.animationImages = listOfMyImageNames;
self.imageView.animationRepeatCount = 0;
self.imageView.animationDuration = 2;
[self.imageView startAnimating];
I know that the UIImageView solution gives more flexibility with the number of loops etc., but I want an infinite animation, so that doesn't matter in this case.

Of the two options you describe, the animatedImageNamed approach is better than animationImages. The animationImages approach can crash your device if the series of images is too long or the width and height are too large, because it eats up memory at a shocking rate, whereas the animatedImageNamed API has a better memory bound. Neither approach is actually the "best" in terms of CPU and memory usage, as there are significantly better implementations available.

Related

UIImageView + UIImage vs CALayer + Content Efficiency

Currently, I have a UIImageView that updates its UIImage every few seconds or so. This is an endless process in a very heavy UI.
Given that CALayers are generally lighter-weight than UIViews, I am wondering whether I should replace the UIImageView with a CALayer and set the layer's contents instead of setting a UIImage.
Current
//Every 2 seconds
myImageView.image = UIImage(cgImage: myCGImage)
Suggestion
//Every 2 seconds
myLayer.contents = myCGImage
Since UIImageView behaves slightly differently, I am wondering which method would be more efficient on the CPU/GPU overall.
In general, UIView objects are fairly thin wrappers around CALayers, and UIImageView is no exception. Most of the "heavy lifting" of decoding and displaying the image is done by the CALayer in both cases, so I doubt you'll see much difference.
UIImageView is easier to use, and the resulting code is easier to read, so unless you need to do something that requires you to use CALayers, I'd stick with UIKit objects.
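For illustration, here is a minimal Objective-C sketch of the two approaches; myCGImage and plainView are placeholder names, not taken from the question:
// UIImageView: the view owns a CALayer and sets its contents for you.
self.imageView.image = [UIImage imageWithCGImage:myCGImage];
// Bare CALayer: assign the CGImage directly to the layer's contents property.
self.plainView.layer.contents = (__bridge id)myCGImage;
Either way, the heavy lifting ends up in the layer, which is the point of the answer above.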

GPUImage performance

I'm using GPUImage to apply filters and chain filters on images. I'm using a UISlider to change the value of the filters, applying them continuously to the image as the slider's value changes, so that the user can see the output while adjusting the value.
This is causing very slow processing, and sometimes the UI hangs or the app even crashes after receiving a low-memory warning.
How can I achieve fast filtering with GPUImage? I have seen apps which apply filters on the fly and their UI doesn't even hang for a second.
Thanks,
Here's the sample code which I'm using as slider's value changes.
- (IBAction)foregroundSliderValueChanged:(id)sender {
    float value = ([(UISlider *)sender maximumValue] - [(UISlider *)sender value]) + [(UISlider *)sender minimumValue];
    [(GPUImageVignetteFilter *)self.filter setVignetteEnd:value];
    GPUImagePicture *filteredImage = [[GPUImagePicture alloc] initWithImage:_image];
    [filteredImage addTarget:self.filter];
    [filteredImage processImage];
    self.imageView.image = [self.filter imageFromCurrentlyProcessedOutputWithOrientation:_image.imageOrientation];
}
You haven't specified how you set up your filter chain, what filters you use, or how you're doing your updates, so it's hard to provide anything but the most generic advice. Still, here goes:
If processing an image for display to the screen, never use a UIImageView. Converting to and from a UIImage is an extremely slow process, and one that should never be used for live updates of anything. Instead, go GPUImagePicture -> filters -> GPUImageView. This keeps the image on the GPU and is far more efficient, processing- and memory-wise.
Only process as many pixels as you actually will be displaying. Use -forceProcessingAtSize: or -forceProcessingAtSizeRespectingAspectRatio: on the first filter in your chain to reduce its resolution to the output resolution of your GPUImageView. This will cause your filters to operate on image frames that are usually many times smaller than your full-resolution source image. There's no reason to process pixels you'll never see. You can then pass in a 0 size to these same methods when you need to finally capture the full-resolution image to disk.
Find more efficient ways of setting up your filter chain. If you have a common set of simple operations that you apply over and over to your images, think about creating a custom shader that combines these operations, as appropriate. Expensive operations also sometimes have a cheaper substitute, like how I use a downsampling-then-upsampling pass for GPUImageiOSBlur to use a much smaller blur radius than I would with a stock GPUImageGaussianBlur.
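As a rough sketch of the GPU-resident pipeline described in the first two points (gpuImageView and sourcePicture are placeholder names; the vignette filter is carried over from the question):
// Set up once: keep everything on the GPU, GPUImagePicture -> filter -> GPUImageView.
self.sourcePicture = [[GPUImagePicture alloc] initWithImage:_image];
self.filter = [[GPUImageVignetteFilter alloc] init];
// Only process as many pixels as the target view will actually display.
[self.filter forceProcessingAtSizeRespectingAspectRatio:self.gpuImageView.sizeInPixels];
[self.sourcePicture addTarget:self.filter];
[self.filter addTarget:self.gpuImageView];
[self.sourcePicture processImage];

// On each slider change: update the parameter and re-render, with no UIImage round trip.
[(GPUImageVignetteFilter *)self.filter setVignetteEnd:value];
[self.sourcePicture processImage];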

UIView background - colorWithPatternImage or UIImageView?

I'm wondering what the differences are or, more importantly, what's better for performance - for very fast redrawing:
1) myView/myLayer.backgroundColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"myImage.png"]]; for about 40 to 90 views/layers
or
2) just using about 40 to 90 UIImageViews?
What's better for fast redrawing, and what's going on under the hood, so I can understand which one to pick?
Thanks
This previous post might help: colorWithPatternImage Vs. UIImageView
As well as this: http://cocoaintheshell.com/2011/01/colorwithpatternimage-memory-usage/
Consensus seems to be that colorWithPatternImage uses a lot of memory and that you should use UIImageView instead.
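For comparison, a minimal sketch of the UIImageView alternative (background and myView are placeholder names):
// Instead of a pattern-image background color, put an image view behind the content.
UIImageView *background = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"myImage.png"]];
background.frame = myView.bounds;
[myView insertSubview:background atIndex:0];
Note that a UIImageView stretches or fills rather than tiling the way a pattern color does, so this only works if you don't rely on the tiling behaviour.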

Effectively scaling multiple images in iOS

I have 15 images being displayed on a single view. I need to scale the images based on the user's voice (the louder they speak, the larger the images need to scale). At the moment I am using averagePowerForChannel on the AVAudioRecorder and frequently sampling the audio to scale all the images appropriately. The code I'm using to do the scaling looks something like this:
- (void)scaleImages:(float)scalingFactor {
    for (UIView *imageHolder in self.imageView.subviews) {
        UIView *image = [imageHolder.subviews objectAtIndex:0];
        image.transform = CGAffineTransformMakeScale(scalingFactor, scalingFactor);
        image.hidden = scalingFactor <= 0.0f;
    }
}
This works fine when I have a single image, but when I do this for all 15 images it becomes incredibly laggy and unresponsive. I have tried several different options - sampling less frequently, normalizing the sampling output, etc. - but nothing seems to make a difference.
How would I optimize this?
You might want to try the GPUImage framework. It uses GPU-based (OpenGL ES) processing to accelerate image transforms.
https://github.com/BradLarson/GPUImage
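A rough sketch of what that could look like for this problem, using GPUImage's transform filter (the names below are illustrative, not from the question):
// One GPU-resident pipeline per image; scale by updating the transform filter.
GPUImagePicture *picture = [[GPUImagePicture alloc] initWithImage:sourceImage];
GPUImageTransformFilter *transformFilter = [[GPUImageTransformFilter alloc] init];
[picture addTarget:transformFilter];
[transformFilter addTarget:gpuImageView]; // a GPUImageView already in the view hierarchy
[picture processImage];

// When a new audio level arrives, update the scale and re-render:
transformFilter.affineTransform = CGAffineTransformMakeScale(scalingFactor, scalingFactor);
[picture processImage];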

CGContextDrawImage is EXTREMELY slow after large UIImage drawn into it

It seems that CGContextDrawImage(CGContextRef, CGRect, CGImageRef) performs MUCH WORSE when drawing a CGImage that was created by CoreGraphics (i.e. with CGBitmapContextCreateImage) than it does when drawing the CGImage which backs a UIImage. See this testing method:
-(void)showStrangePerformanceOfCGContextDrawImage
{
    ///Setup : Load an image and start a context:
    UIImage *theImage = [UIImage imageNamed:@"reallyBigImage.png"];
    UIGraphicsBeginImageContext(theImage.size);
    CGContextRef ctxt = UIGraphicsGetCurrentContext();
    CGRect imgRec = CGRectMake(0, 0, theImage.size.width, theImage.size.height);
    ///Why is this SO MUCH faster...
    NSDate *startingTimeForUIImageDrawing = [NSDate date];
    CGContextDrawImage(ctxt, imgRec, theImage.CGImage); //Draw existing image into context using the UIImage backing
    NSLog(@"Time was %f", [[NSDate date] timeIntervalSinceDate:startingTimeForUIImageDrawing]);
    /// Create a new image from the context to use this time in CGContextDrawImage:
    CGImageRef theImageConverted = CGBitmapContextCreateImage(ctxt);
    ///This is WAY slower, but why?? Using a pure CGImageRef (as opposed to one behind a UIImage) seems like it should be faster, but AT LEAST it should be the same speed!?
    NSDate *startingTimeForNakedGImageDrawing = [NSDate date];
    CGContextDrawImage(ctxt, imgRec, theImageConverted);
    NSLog(@"Time was %f", [[NSDate date] timeIntervalSinceDate:startingTimeForNakedGImageDrawing]);
}
So I guess the question is, #1 what may be causing this and #2 is there a way around it, i.e. other ways to create a CGImageRef which may be faster? I realize I could convert everything to UIImages first but that is such an ugly solution. I already have the CGContextRef sitting there.
UPDATE: This seems to not necessarily be true when drawing small images? That may be a clue - this problem is amplified when large images (i.e. full-size camera pics) are used. 640x480 seems to be pretty similar in terms of execution time with either method.
UPDATE 2: OK, so I've discovered something new. It's actually NOT the backing of the CGImage that is changing the performance. I can flip-flop the order of the two steps and make the UIImage method behave slowly, whereas the "naked" CGImage will be super fast. It seems whichever you perform second will suffer from terrible performance. This seems to be the case UNLESS I free memory by calling CGImageRelease on the image I created with CGBitmapContextCreateImage. Then the UIImage-backed method will be fast subsequently. The inverse is not true. What gives? "Crowded" memory shouldn't affect performance like this, should it?
UPDATE 3: Spoke too soon. The previous update holds true for images at size 2048x2048, but stepping up to 1936x2592 (camera size) the naked CGImage method is still way slower, regardless of the order of operations or the memory situation. Maybe there are some CG internal limits that make a 16 MB image efficient whereas the 21 MB image can't be handled efficiently. It's literally 20 times slower to draw the camera size than a 2048x2048. Somehow UIImage provides its CGImage data much faster than a pure CGImage object does. o.O
UPDATE 4 : I thought this might have to do with some memory caching thing, but the results are the same whether the UIImage is loaded with the non-caching [UIImage imageWithContentsOfFile] as if [UIImage imageNamed] is used.
UPDATE 5 (Day 2): After creating more questions than were answered yesterday, I have something solid today. What I can say for sure is the following:
The CGImages behind a UIImage don't use alpha. (kCGImageAlphaNoneSkipLast). I thought that maybe they were faster to be drawn because my context WAS using alpha. So I changed the context to use kCGImageAlphaNoneSkipLast. This makes the drawing MUCH faster, UNLESS:
Drawing into a CGContextRef with a UIImage FIRST, makes ALL subsequent image drawing slow
I proved this by 1)first creating a non-alpha context (1936x2592). 2) Filled it with randomly colored 2x2 squares. 3) Full frame drawing a CGImage into that context was FAST (.17 seconds) 4) Repeated experiment but filled context with a drawn CGImage backing a UIImage. Subsequent full frame image drawing was 6+ seconds. SLOWWWWW.
Somehow drawing into a context with a (Large) UIImage drastically slows all subsequent drawing into that context.
Well, after a TON of experimentation I think I have found the fastest way to handle situations like this. The drawing operation above, which was taking 6+ seconds, now takes 0.1 seconds. YES. Here's what I discovered:
Homogenize your contexts & images with a pixel format! The root of the question I asked boiled down to the fact that the CGImages inside a UIImage were using THE SAME PIXEL FORMAT as my context. Therefore fast. The CGImages were a different format and therefore slow. Inspect your images with CGImageGetAlphaInfo to see which pixel format they use. I'm using kCGImageAlphaNoneSkipLast EVERYWHERE now as I don't need to work with alpha. If you don't use the same pixel format everywhere, when drawing an image into a context Quartz will be forced to perform expensive pixel-conversions for EACH pixel. = SLOW
USE CGLayers! These make offscreen-drawing performance much better. How this works is basically as follows. 1) Create a CGLayer from the context using CGLayerCreateWithContext. 2) Do any drawing/setting of drawing properties on THIS LAYER'S CONTEXT, which is obtained with CGLayerGetContext. READ any pixels or information from the ORIGINAL context. 3) When done, "stamp" this CGLayer back onto the original context using CGContextDrawLayerAtPoint. This is FAST as long as you keep in mind:
1) Release any CGImages created from a context (i.e. those created with CGBitmapContextCreateImage) BEFORE "stamping" your layer back into the CGContextRef using CGContextDrawLayerAtPoint. This creates a 3-4x speed increase when drawing that layer. 2) Keep your pixel format the same everywhere!! 3) Clean up CG objects AS SOON as you can. Things hanging around in memory seem to create strange situations of slowdown, probably because there are callbacks or checks associated with these strong references. Just a guess, but I can say that CLEANING UP MEMORY ASAP helps performance immensely.
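A minimal sketch of the CGLayer pattern described above (the size and the fill drawing are illustrative):
// Offscreen drawing via a CGLayer that shares the destination context's pixel format.
CGContextRef ctxt = UIGraphicsGetCurrentContext();
CGLayerRef layer = CGLayerCreateWithContext(ctxt, CGSizeMake(1936, 2592), NULL);
CGContextRef layerCtxt = CGLayerGetContext(layer);

// Do the heavy drawing into the layer's context...
CGContextSetFillColorWithColor(layerCtxt, [UIColor redColor].CGColor);
CGContextFillRect(layerCtxt, CGRectMake(0, 0, 100, 100));

// ...release any CGImages created from the context (CGBitmapContextCreateImage)
// before stamping, then draw the layer back into the original context.
CGContextDrawLayerAtPoint(ctxt, CGPointZero, layer);
CGLayerRelease(layer);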
I had a similar problem. My application has to redraw a picture almost as large as the screen size. The problem came down to drawing, as fast as possible, two images of the same resolution, neither rotated nor flipped, but scaled and positioned in different places of the screen each time. In the end, I was able to get ~15-20 FPS on the iPad 1 and ~20-25 FPS on the iPad 4. So... hope this helps someone:
Exactly as typewriter said, you have to use the same pixel format. Using one with AlphaNone gives a speed boost. But even more important, the argb32_image call in my case made numerous calls converting pixels from ARGB to BGRA. So the best bitmapInfo value for me was (at the time; there is a probability that Apple may change something here in the future):
const CGBitmapInfo g_bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast;
CGContextDrawImage may work faster if the rectangle argument is made integral (via CGRectIntegral). This seems to have more effect when the image is scaled by a factor close to 1.
Using layers actually slowed down things for me. Probably something was changed since 2011 in some internal calls.
Setting the interpolation quality for the context lower than the default (via CGContextSetInterpolationQuality) is important. I would recommend using (IS_RETINA_DISPLAY ? kCGInterpolationNone : kCGInterpolationLow). The IS_RETINA_DISPLAY macro is taken from here.
Make sure you get the CGColorSpaceRef from CGColorSpaceCreateDeviceRGB() or the like when creating the context. Some performance issues were reported for using a fixed color space instead of requesting that of the device.
Inheriting the view class from UIImageView and simply setting self.image to the image created from the context proved useful to me. However, read about using UIImageView first if you want to do this, as it requires some changes in code logic (because drawRect: isn't called anymore).
And if you can avoid scaling your image at the time of actual drawing, try to do so. Drawing a non-scaled image is significantly faster - unfortunately, for me that was not an option.
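Pulling those points together, a context setup along the lines this answer describes might look like the following (a sketch: the sizes are placeholders and sourceCGImage is assumed to be a CGImageRef already in the same pixel format):
// The bitmapInfo value recommended above, plus a device RGB color space.
const CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctxt = CGBitmapContextCreate(NULL, 1936, 2592, 8, 0, colorSpace, bitmapInfo);
CGColorSpaceRelease(colorSpace);

// Lower the interpolation quality below the default.
CGContextSetInterpolationQuality(ctxt, kCGInterpolationLow);

// Snap the destination rectangle to integral coordinates before drawing.
CGRect destRect = CGRectIntegral(CGRectMake(10.25, 10.75, 968.0, 1296.0));
CGContextDrawImage(ctxt, destRect, sourceCGImage);

CGContextRelease(ctxt);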
