Effectively scaling multiple images in iOS

I have 15 images being displayed on a single view. I need to scale the images based on the user's voice (the louder they speak, the larger the images need to scale). At the moment I am using averagePowerForChannel on the AVAudioRecorder and frequently sampling the audio to scale all the images appropriately. The code I'm using to do the scaling looks something like this:
- (void)scaleImages:(float)scalingFactor {
    for (UIView *imageHolder in self.imageView.subviews) {
        UIView *image = [imageHolder.subviews objectAtIndex:0];
        image.transform = CGAffineTransformMakeScale(scalingFactor, scalingFactor);
        image.hidden = scalingFactor <= 0.0f;
    }
}
This works fine when I have a single image, but when I do this for all 15 images it becomes incredibly laggy and unresponsive. I have tried several different options (sampling less frequently, normalizing the sampling output, etc.) but nothing seems to make a difference.
How would I optimize this?

You might want to try the GPUImage framework. It uses the GPU (OpenGL ES shaders) to accelerate image transforms.
https://github.com/BradLarson/GPUImage
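For illustration, a minimal sketch of such a pipeline for one image, assuming GPUImage is linked and that its GPUImageTransformFilter (which exposes an affineTransform property) is used; sourceImage, frame, and scalingFactor are placeholders, not the original poster's code. The transform is applied on the GPU, so updating the scale does not re-render the bitmap on the CPU:
// Hedged sketch: GPUImagePicture -> GPUImageTransformFilter -> GPUImageView.
GPUImagePicture *picture = [[GPUImagePicture alloc] initWithImage:sourceImage];
GPUImageTransformFilter *transformFilter = [[GPUImageTransformFilter alloc] init];
GPUImageView *outputView = [[GPUImageView alloc] initWithFrame:frame]; // replaces the plain UIView/UIImageView

[picture addTarget:transformFilter];
[transformFilter addTarget:outputView];
[picture processImage];

// On each new audio level sample:
transformFilter.affineTransform = CGAffineTransformMakeScale(scalingFactor, scalingFactor);
[picture processImage];
The same chain would be repeated (or the transform filter shared) for each of the 15 images.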

Related

How can I tile the background with UIImageViews with code efficiently?

I'm working in Xcode 6 on tiling the iPhone background with many UIImageViews and I'd like to know if this is the most efficient solution.
I know one simple solution would be to create image views in the storyboard and cover the entire screen with them manually. I'd like to do it with code. Here's the code I currently have (5x5 is an okay size since I can scale it up or down to fill the screen with bigger or smaller images):
CGRect tiles[5][5];
UIImage *tileImages[5][5];
UIImageView *tileViews[5][5];
for (int i = 0; i < 5; i++)
{
    for (int j = 0; j < 5; j++)
    {
        tiles[i][j] = CGRectMake(50*i, 50*j, 50, 50);
        tileImages[i][j] = [UIImage imageNamed:@"tile.png"];
        tileViews[i][j] = [[UIImageView alloc] initWithFrame:tiles[i][j]];
        tileViews[i][j].image = tileImages[i][j];
        [self.view addSubview:tileViews[i][j]];
    }
}
Currently all the images are the same, but in the long haul I'm going to make them dependent on various factors.
I have read around and I know that UIImageViews are finicky. Is this the proper and memory-efficient way to tile a background with UIImageViews? Is there a better way to do this? Can I manually go in after the tiles are initialized, change the image one of them is displaying, and have it update in real time with just this?
tileViews[1][2].image = [UIImage imageNamed:@"anotherTile.png"];
Thanks in advance. I just finished a basic 6-week course in iOS programming at my college, so I still find myself trying to appease the Objective-C gods occasionally.
I guess my doubt would be why you need them to be image views. Drawing an image in a view or layer is so easy, and arranging views or layers in a grid is so easy; what are the image views for, when you are not really using or needing any of the power of image views?
I have several apps that display tiled images - one shows 99 bottles in a grid, one shows a grid of tile "pieces" that the user taps in matched pairs to dismiss them, one shows a grid of rectangular puzzle pieces that the user slides to swap them and get them into the right order, and one shows a grid of "cards" that the user taps in triplets to match them - and in none of those cases do I use image views. In one, they are CALayers; in the other cases they are custom UIView subclasses; but in no case do I use UIImageViews.
For something as simple as a grid of images, using UIImageViews seems, as you imply, like overkill. If the reason you have resorted to UIImageViews is that you don't know how to make a UIView or a CALayer draw its content, I'd say, stop and learn how to do that before you go any further.
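For what it's worth, a minimal sketch of the layer approach described above (tileImage is a placeholder UIImage; this is not from the original answer):
CGFloat tileSize = 50.0;
for (int row = 0; row < 5; row++) {
    for (int col = 0; col < 5; col++) {
        CALayer *tile = [CALayer layer];
        tile.frame = CGRectMake(tileSize * col, tileSize * row, tileSize, tileSize);
        tile.contents = (__bridge id)tileImage.CGImage; // the layer simply displays the CGImage
        [self.view.layer addSublayer:tile];
    }
}
// A tile's image can still be swapped later, e.g.:
// someTileLayer.contents = (__bridge id)anotherImage.CGImage;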

How to layer a distinct alpha channel animation on top of video?

I'm developing an iPhone app and am having issues with the AVFoundation API; I'm used to doing lots of image manipulation and just kind of figured I would have access to an image buffer, but the video API is quite different.
I want to take a 30 frame/sec animation, which is generated as PNGs with a transparency channel, and overlay it onto an arbitrary number of video clips composited inside an AVMutableComposition.
I figured that AVMutableVideoComposition would be a good way to go about it; but as it turns out, the animation tool, AVVideoCompositionCoreAnimationTool, requires a special kind of CALayer animation. It supports an animation with basic stuff like a spatial transform, scaling, fading, etc. -- but my animation is already complete as a series of PNGs.
Is this possible with AVFoundation? If so, what is the recommended process?
I think you should work with UIImageView and animationImages:
UIImageView *anImageView = [[UIImageView alloc] initWithFrame:frame];
NSMutableArray *animationImages = [NSMutableArray array];
for (int i = 0; i < 500; i++) {
    [animationImages addObject:[UIImage imageNamed:[NSString stringWithFormat:@"image%d", i]]];
}
anImageView.animationImages = animationImages;
anImageView.animationDuration = 500.0 / 30.0; // 500 frames at 30 fps (avoid integer division)
[anImageView startAnimating];
I would use the AVVideoCompositing protocol along with AVAsynchronousVideoCompositionRequest. Use [AVAsynchronousVideoCompositionRequest sourceFrameByTrackID:] to get the CVPixelBufferRef for the video frame. Then create a CIImage from the appropriate PNG based on the timing you want. Then render the video frame to a GL texture, render the CIImage to another GL texture, draw them both into your destination CVPixelBufferRef, and you should get the effect you are looking for.
Something Like:
CVPixelBufferRef foregroundPixelBuffer; // allocated elsewhere
CIImage *appropriatePNG = [CIImage imageWithContentsOfURL:pngURL];
[someCIContext render:appropriatePNG toCVPixelBuffer:foregroundPixelBuffer];
CVPixelBufferRef backgroundPixelBuffer = [asynchronousVideoRequest sourceFrameByTrackID:theTrackID];
// ... GL code for rendering using CVOpenGLESTextureCacheRef
You will need to composite each set of input images (background and foreground) down to a single pixel buffer and then encode those buffers into an H.264 video one frame at a time. Just be aware that this will not be super fast, since there is a lot of memory writing and encoding time involved in creating the H.264. You can have a look at AVRender to see a working example of the approach described here. If you would like to roll your own implementation, then take a look at this tutorial, which includes source code that can help you get started.
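To make the shape of that approach concrete, here is a rough skeleton of a custom compositor conforming to AVVideoCompositing. The class name, the BGRA pixel format, and the overlay step are assumptions; the actual CI/GL rendering is left as comments.
#import <AVFoundation/AVFoundation.h>
#import <CoreVideo/CoreVideo.h>

@interface PNGOverlayCompositor : NSObject <AVVideoCompositing>
@end

@implementation PNGOverlayCompositor

- (NSDictionary *)sourcePixelBufferAttributes {
    return @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
}

- (NSDictionary *)requiredPixelBufferAttributesForRenderContext {
    return @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
}

- (void)renderContextChanged:(AVVideoCompositionRenderContext *)newRenderContext {
    // Recreate any context-dependent CI/GL resources here.
}

- (void)startVideoCompositionRequest:(AVAsynchronousVideoCompositionRequest *)request {
    CMPersistentTrackID trackID = [[request.sourceTrackIDs objectAtIndex:0] intValue];
    CVPixelBufferRef background = [request sourceFrameByTrackID:trackID]; // the video frame
    CVPixelBufferRef output = [request.renderContext newPixelBuffer];     // destination buffer

    // 1. Pick the PNG that corresponds to request.compositionTime.
    // 2. Render `background` plus the PNG into `output` (via CIContext or OpenGL ES).
    (void)background;

    [request finishWithComposedVideoFrame:output];
    CVPixelBufferRelease(output); // newPixelBuffer returns a +1 buffer
}

@end
The compositor gets hooked up by setting customVideoCompositorClass on the AVMutableVideoComposition.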

GPUImage performance

I'm using GPUImage to apply and chain filters on images. I'm using a UISlider to change the value of the filters, and I apply the filters continuously as the slider's value changes, so that the user can see the output while adjusting the value.
This causes very slow processing, and sometimes the UI hangs or the app even crashes after receiving a low-memory warning.
How can I achieve fast filtering using GPUImage? I have seen some apps that apply filters on the fly, and their UI doesn't even hang for a second.
Thanks,
Here's the sample code which I'm using as slider's value changes.
- (IBAction)foregroundSliderValueChanged:(id)sender {
    float value = ([(UISlider *)sender maximumValue] - [(UISlider *)sender value]) + [(UISlider *)sender minimumValue];
    [(GPUImageVignetteFilter *)self.filter setVignetteEnd:value];
    GPUImagePicture *filteredImage = [[GPUImagePicture alloc] initWithImage:_image];
    [filteredImage addTarget:self.filter];
    [filteredImage processImage];
    self.imageView.image = [self.filter imageFromCurrentlyProcessedOutputWithOrientation:_image.imageOrientation];
}
You haven't specified how you set up your filter chain, what filters you use, or how you're doing your updates, so it's hard to provide anything but the most generic advice. Still, here goes:
If processing an image for display to the screen, never use a UIImageView. Converting to and from a UIImage is an extremely slow process, and one that should never be used for live updates of anything. Instead, go GPUImagePicture -> filters -> GPUImageView. This keeps the image on the GPU and is far more efficient, processing- and memory-wise.
Only process as many pixels as you actually will be displaying. Use -forceProcessingAtSize: or -forceProcessingAtSizeRespectingAspectRatio: on the first filter in your chain to reduce its resolution to the output resolution of your GPUImageView. This will cause your filters to operate on image frames that are usually many times smaller than your full-resolution source image. There's no reason to process pixels you'll never see. You can then pass in a 0 size to these same methods when you need to finally capture the full-resolution image to disk.
Find more efficient ways of setting up your filter chain. If you have a common set of simple operations that you apply over and over to your images, think about creating a custom shader that combines these operations, as appropriate. Expensive operations also sometimes have a cheaper substitute, like how I use a downsampling-then-upsampling pass for GPUImageiOSBlur to use a much smaller blur radius than I would with a stock GPUImageGaussianBlur.
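To make the first two points concrete, a sketch of what that setup might look like for the vignette example in the question; self.sourcePicture and self.gpuImageView are assumed properties, the GPUImageView replaces the UIImageView, and the slider math is simplified:
- (void)setUpFilterChain {
    self.sourcePicture = [[GPUImagePicture alloc] initWithImage:_image];
    self.filter = [[GPUImageVignetteFilter alloc] init];
    // Only process as many pixels as the view will display (in points here;
    // multiply by the screen scale if you want exact Retina pixels).
    [self.filter forceProcessingAtSize:self.gpuImageView.bounds.size];
    [self.sourcePicture addTarget:self.filter];
    [self.filter addTarget:self.gpuImageView];
    [self.sourcePicture processImage];
}

- (IBAction)foregroundSliderValueChanged:(UISlider *)sender {
    [(GPUImageVignetteFilter *)self.filter setVignetteEnd:sender.value];
    [self.sourcePicture processImage]; // re-render; the image never leaves the GPU
}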

Painfully slow software vectors, particularly CoreGraphics vs. OpenGL

I'm working on an iOS app that requires drawing Bézier curves in real time in response to the user's input. At first, I decided to try using CoreGraphics, which has a fantastic vector drawing API. However, I quickly discovered that performance was painfully, excruciatingly slow, to the point where the framerate started dropping severely with just ONE curve on my retina iPad. (Admittedly, this was a quick test with inefficient code. For example, the curve was getting redrawn every frame. But surely today's computers are fast enough to handle drawing a simple curve every 1/60th of a second, right?!)
After this experiment, I switched to OpenGL and the MonkVG library, and I couldn't be happier. I can now render HUNDREDS of curves simultaneously without any framerate drop, with only a minimal impact on fidelity (for my use case).
Is it possible that I misused CoreGraphics somehow (to the point where it was several orders of magnitude slower than the OpenGL solution), or is performance really that terrible? My hunch is that the problem lies with CoreGraphics, based on the number of StackOverflow/forum questions and answers regarding CG performance. (I've seen several people state that CG isn't meant to go in a run loop, and that it should only be used for infrequent rendering.) Why is this the case, technically speaking?
If CoreGraphics really is that slow, how on earth does Safari work so smoothly? I was under the impression that Safari isn't hardware-accelerated, and yet it has to display hundreds (if not thousands) of vector characters simultaneously without dropping any frames.
More generally, how do applications with heavy vector use (browsers, Illustrator, etc.) stay so fast without hardware acceleration? (As I understand it, many browsers and graphics suites now come with a hardware acceleration option, but it's often not turned on by default.)
UPDATE:
I have written a quick test app to more accurately measure performance. Below is the code for my custom CALayer subclass.
With NUM_PATHS set to 5 and NUM_POINTS set to 15 (5 curve segments per path), the code runs at 20fps in non-retina mode and 6fps in retina mode on my iPad 3. The profiler lists CGContextDrawPath as having 96% of the CPU time. Yes — obviously, I can optimize by limiting my redraw rect, but what if I really, truly needed full-screen vector animation at 60fps?
OpenGL eats this test for breakfast. How is it possible for vector drawing to be so incredibly slow?
#import "CGTLayer.h"
#implementation CGTLayer
- (id) init
{
self = [super init];
if (self)
{
self.backgroundColor = [[UIColor grayColor] CGColor];
displayLink = [[CADisplayLink displayLinkWithTarget:self selector:#selector(updatePoints:)] retain];
[displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
initialized = false;
previousTime = 0;
frameTimer = 0;
}
return self;
}
- (void) updatePoints:(CADisplayLink*)displayLink
{
for (int i = 0; i < NUM_PATHS; i++)
{
for (int j = 0; j < NUM_POINTS; j++)
{
points[i][j] = CGPointMake(arc4random()%768, arc4random()%1024);
}
}
for (int i = 0; i < NUM_PATHS; i++)
{
if (initialized)
{
CGPathRelease(paths[i]);
}
paths[i] = CGPathCreateMutable();
CGPathMoveToPoint(paths[i], &CGAffineTransformIdentity, points[i][0].x, points[i][0].y);
for (int j = 0; j < NUM_POINTS; j += 3)
{
CGPathAddCurveToPoint(paths[i], &CGAffineTransformIdentity, points[i][j].x, points[i][j].y, points[i][j+1].x, points[i][j+1].y, points[i][j+2].x, points[i][j+2].y);
}
}
[self setNeedsDisplay];
initialized = YES;
double time = CACurrentMediaTime();
if (frameTimer % 30 == 0)
{
NSLog(#"FPS: %f\n", 1.0f/(time-previousTime));
}
previousTime = time;
frameTimer += 1;
}
- (void)drawInContext:(CGContextRef)ctx
{
// self.contentsScale = [[UIScreen mainScreen] scale];
if (initialized)
{
CGContextSetLineWidth(ctx, 10);
for (int i = 0; i < NUM_PATHS; i++)
{
UIColor* randomColor = [UIColor colorWithRed:(arc4random()%RAND_MAX/((float)RAND_MAX)) green:(arc4random()%RAND_MAX/((float)RAND_MAX)) blue:(arc4random()%RAND_MAX/((float)RAND_MAX)) alpha:1];
CGContextSetStrokeColorWithColor(ctx, randomColor.CGColor);
CGContextAddPath(ctx, paths[i]);
CGContextStrokePath(ctx);
}
}
}
#end
You really should not compare Core Graphics drawing with OpenGL; you are comparing completely different features designed for very different purposes.
In terms of image quality, Core Graphics and Quartz are going to be far superior to OpenGL with less effort. The Core Graphics framework is designed for optimal appearance, with naturally antialiased lines and curves and the polish associated with Apple's UIs. But this image quality comes at a price: rendering speed.
OpenGL on the other hand is designed with speed as a priority. High performance, fast drawing is hard to beat with OpenGL. But this speed comes at a cost: It is much harder to get smooth and polished graphics with OpenGL. There are many different strategies to do something as "simple" as antialiasing in OpenGL, something which is more easily handled by Quartz/Core Graphics.
First, see Why is UIBezierPath faster than Core Graphics path? and make sure you're configuring your path optimally. By default, CGContext adds a lot of "pretty" options to paths that can add a lot of overhead. If you turn these off, you will likely find dramatic speed improvements.
The next problem I've found with Core Graphics Bézier curves is when you have many components in a single curve (I was seeing problems when I went over about 3000-5000 elements). I found very surprising amounts of time spent in CGPathAdd.... Reducing the number of elements in your path can be a major win. From my talks with the Core Graphics team last year, this may have been a bug in Core Graphics and may have been fixed. I haven't re-tested.
EDIT: I'm seeing 18-20FPS in Retina on an iPad 3 by making the following changes:
Move the CGContextStrokePath() outside the loop. You shouldn't stroke every path. You should stroke once at the end. This takes my test from ~8FPS to ~12FPS.
Turn off anti-aliasing (which is probably turned off by default in your OpenGL tests):
CGContextSetShouldAntialias(ctx, false);
That gets me to 18-20FPS (Retina) and up to around 40FPS non-Retina.
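Applied to the drawInContext: method from the question, those two changes look roughly like this (a sketch; note that stroking once means every path in that pass shares a single stroke color):
- (void)drawInContext:(CGContextRef)ctx
{
    if (!initialized) return;

    CGContextSetShouldAntialias(ctx, false); // match the (likely) non-antialiased OpenGL test
    CGContextSetLineWidth(ctx, 10);
    CGContextSetStrokeColorWithColor(ctx, [UIColor redColor].CGColor);
    for (int i = 0; i < NUM_PATHS; i++)
    {
        CGContextAddPath(ctx, paths[i]); // accumulate all paths...
    }
    CGContextStrokePath(ctx); // ...and stroke them in one pass
}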
I don't know what you're seeing in OpenGL. Remember that Core Graphics is designed to make things beautiful; OpenGL is designed to make things fast. Core Graphics relies on OpenGL, so I would always expect well-written OpenGL code to be faster.
Disclaimer: I'm the author of MonkVG.
The biggest reason that MonkVG is so much faster than CoreGraphics is actually not so much that it is implemented with OpenGL ES as a render backing, but because it "cheats" by tessellating the contours into polygons before any rendering is done. The contour tessellation is actually painfully slow, and if you were to dynamically generate contours you would see a big slowdown. The great benefit of an OpenGL backing (versus CoreGraphics using direct bitmap rendering) is that any transform such as translation, rotation, or scaling does not force a complete re-tessellation of the contours -- it comes essentially for "free".
Your slowdown is because of this line of code:
[self setNeedsDisplay];
You need to change this to:
[self setNeedsDisplayInRect:changedRect];
It's up to you to calculate what rectangle has changed every frame, but if you do this properly, you will likely see over an order of magnitude performance improvement with no other changes.
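A sketch of one way to compute that rectangle for the test code above (not part of the original answer): union the bounding boxes of the regenerated paths, pad for the stroke width, and remember that the area where the paths were drawn last frame also needs to be invalidated.
CGRect changedRect = CGRectNull;
for (int i = 0; i < NUM_PATHS; i++)
{
    changedRect = CGRectUnion(changedRect, CGPathGetBoundingBox(paths[i]));
}
changedRect = CGRectInset(changedRect, -5.0, -5.0); // pad by half the 10pt line width
// Also union with last frame's rect so the old curves get erased, e.g.:
// changedRect = CGRectUnion(changedRect, previousFrameRect);
[self setNeedsDisplayInRect:changedRect];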

How to implement fast image filters on iOS platform

I am working on an iOS application where the user can apply a certain set of photo filters. Each filter is basically a set of Photoshop actions with specific parameters. These actions are:
Levels adjustment
Brightness / Contrast
Hue / Saturation
Single and multiple overlay
I've reproduced all these actions in my code using arithmetic expressions, looping through all the pixels in the image. But when I run my app on an iPhone 4, each filter takes about 3-4 seconds to apply, which is quite a long time for the user to wait. The image size is 640 x 640 px, which is @2x of my view size because it's displayed on a Retina display. I've found that my main problem is the levels modification, which calls the pow() C function each time I need to adjust the gamma. I am using floats, not doubles, of course, because ARMv6 and ARMv7 are slow with doubles. I tried enabling and disabling Thumb and got the same result.
Here is an example of the simplest filter in my app, which runs pretty fast (2 secs). The other filters include more expressions and pow() calls, which makes them slower:
https://gist.github.com/1156760
I've seen some solutions that use the Accelerate framework's vDSP matrix transformations for fast image modifications. I've also seen OpenGL ES solutions. I am not sure they can meet my needs, but perhaps it's just a matter of translating my set of changes into some good convolution matrix?
Any advice would be helpful.
Thanks,
Andrey.
For the filter in your example code, you could use a lookup table to make it much faster. I assume your input image is 8 bits per color and you are converting it to float before passing it to this function. For each color, this only gives 256 possible values and therefore only 256 possible output values. You could precompute these and store them in an array. This would avoid the pow() calculation and the bounds checking since you could factor them into the precomputation.
It would look something like this:
unsigned char table[256];
for (int i = 0; i < 256; i++) {
    float tmp = pow((float)i / 255.0f, 1.3f) * 255.0;
    table[i] = tmp > 255 ? 255 : (unsigned char)tmp;
}

for (int i = 0; i < length; ++i)
    m_OriginalPixelBuf[i] = table[m_OriginalPixelBuf[i]];
In this case, you only have to perform pow() 256 times instead of 3*640*640 times. You would also avoid the branching caused by the bounds checking in your main image loop which can be costly. You would not have to convert to float either.
An even faster way would be to precompute the table outside the program and just put the 256 coefficients in the code.
None of the operations you have listed there should require a convolution or even a matrix multiply. They are all pixel-wise operations, meaning that each output pixel only depends on the single corresponding input pixel. You would need to consider convolution for operations like blurring or sharpening where multiple input pixels affect a single output pixel.
If you're looking for the absolute fastest way to do this, you're going to want to use the GPU to handle the processing. It's built to do massively parallel operations, like color adjustments on single pixels.
As I've mentioned in other answers, I measured a 14X - 28X improvement in performance when running an image processing operation using OpenGL ES instead of on the CPU. You can use the Accelerate framework to do faster on-CPU image manipulation (I believe Apple claims around a ~4-5X boost is possible here), but it won't be as fast as OpenGL ES. It can be easier to implement, however, which is why I've sometimes used Accelerate for this over OpenGL ES.
iOS 5.0 also brings over Core Image from the desktop, which gives you a nice wrapper around these kind of on-GPU image adjustments. However, there are some limitations to the iOS Core Image implementation that you don't have when working with OpenGL ES 2.0 shaders directly.
I present an example of an OpenGL ES 2.0 shader image filter in my article here. The hardest part about doing this kind of processing is getting the OpenGL ES scaffolding set up. Using my sample application there, you should be able to extract that setup code and apply your own filters using it. To make this easier, I've created an open source framework called GPUImage that handles all of the OpenGL ES interaction for you. It has almost every filter you list above, and most run in under 2.5 ms for a 640x480 frame of video on an iPhone 4, so they're far faster than anything processed on the CPU.
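For a rough idea of how the adjustments listed in the question map onto GPUImage filters: the filter class names below are from the framework, the parameter values are arbitrary placeholders, and the final capture call may differ between GPUImage versions.
GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:inputImage];

GPUImageLevelsFilter *levels = [[GPUImageLevelsFilter alloc] init];
[levels setMin:0.0 gamma:1.3 max:1.0 minOut:0.0 maxOut:1.0]; // levels adjustment

GPUImageBrightnessFilter *brightness = [[GPUImageBrightnessFilter alloc] init];
brightness.brightness = 0.1; // brightness/contrast (pair with GPUImageContrastFilter)

GPUImageSaturationFilter *saturation = [[GPUImageSaturationFilter alloc] init];
saturation.saturation = 1.2; // hue/saturation (pair with GPUImageHueFilter)

[source addTarget:levels];
[levels addTarget:brightness];
[brightness addTarget:saturation];
[source processImage];

UIImage *result = [saturation imageFromCurrentlyProcessedOutput];
Overlays would map onto one of the blend filters (for example GPUImageOverlayBlendFilter) with a second image input.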
As I said in a comment, you should post this question on the official Apple Developer Forums as well.
That aside, one real quick check: are you calling pow() or powf()? Even if your data is float, calling pow() will get you the double-precision math library function, which is significantly slower than the single-precision variant powf() (and you'll have to pay for the extra conversions between float and double as well).
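For float data the difference looks like this (a trivial illustration, with placeholder values):
#include <math.h>

float value = 0.5f, gamma = 1.3f;
float slow = pow(value, gamma);  // promotes to double, calls the double-precision routine, converts back
float fast = powf(value, gamma); // stays in single precision throughout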
And a second check: have you profiled your filters in Instruments? Do you actually know where the execution time is being spent, or are you guessing?
I actually wanted to do all this myself, but then I found Silverberg's Image Filters. You can apply various Instagram-type image filters to your images. This is so much better than other image filter libraries out there, such as GLImageProcessing or CImg.
Also check Instagram Image Filters on iPhone.
Hope this helps...
From iOS 5 upwards, you can use the Core Image filters to adjust a good range of image parameters.
To adjust contrast for example, this code works like a charm:
- (void)setImageContrast:(float)contrast forImageView:(UIImageView *)imageView {
    if (contrast > MIN_CONTRAST && contrast < MAX_CONTRAST) {
        CIImage *inputImage = [[CIImage alloc] initWithImage:imageView.image];
        CIFilter *exposureAdjustmentFilter = [CIFilter filterWithName:@"CIColorControls"];
        [exposureAdjustmentFilter setDefaults];
        [exposureAdjustmentFilter setValue:inputImage forKey:@"inputImage"];
        [exposureAdjustmentFilter setValue:[NSNumber numberWithFloat:contrast] forKey:@"inputContrast"]; // default = 1.00
        // [exposureAdjustmentFilter setValue:[NSNumber numberWithFloat:1.0f] forKey:@"inputSaturation"]; // default = 1.00
        // [exposureAdjustmentFilter setValue:[NSNumber numberWithFloat:0.0f] forKey:@"inputBrightness"];
        CIImage *outputImage = [exposureAdjustmentFilter valueForKey:@"outputImage"];
        CIContext *context = [CIContext contextWithOptions:nil];
        CGImageRef cgImage = [context createCGImage:outputImage fromRect:outputImage.extent];
        imageView.image = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage); // createCGImage: returns a +1 reference that must be released
    }
}
N.B. The default value for contrast is 1.0 (the maximum suggested value is 4.0).
Also, contrast is calculated here from the imageView's current image, so calling this method repeatedly will compound the contrast. That means if you call this method first with a contrast value of 2.0 and then again with 3.0, you will get the original image with its contrast increased by 6.0 (2.0 * 3.0), not 5.0.
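If that cumulative behaviour is not what you want, one workaround (a sketch, not part of the original answer) is to keep the untouched image in an assumed property and always build the CIImage from that, so each call applies the contrast to the original rather than to the previously filtered result:
// self.originalImage is an assumed property holding the unfiltered UIImage.
CIImage *inputImage = [[CIImage alloc] initWithImage:self.originalImage];
// ...then apply CIColorControls exactly as above and assign the result to imageView.image.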
Check the Apple documentation for more filters and parameters.
To list all available filters and parameters in code, just run this loop:
NSArray *filters = [CIFilter filterNamesInCategories:nil];
for (NSString *filterName in filters)
{
    NSLog(@"Filter: %@", filterName);
    NSLog(@"Parameters: %@", [[CIFilter filterWithName:filterName] attributes]);
}
This is an old thread, but I got to it from another link on SO, so people still read it.
With iOS 5, Apple added support for Core Image and a decent number of Core Image filters. I'm pretty sure all the ones the OP mentioned are available.
Core Image uses OpenGL shaders under the covers, so it's really fast. It's much easier to use than OpenGL, however. If you aren't already working in OpenGL and just want to apply filters to CGImage or UIImage objects, Core Image filters are the way to go.
