How to render an image with an effect faster with UIKit - iOS

I'm making an iOS app in which there's a process that switches a lot of pictures across several UIImageViews (a loop that sets the image property of a UIImageView from a bunch of images). Sometimes some of the images need a graphic effect, say multiplication.
The easiest way is to use a CIFilter, but the problem is that CALayer on iOS doesn't support the "filters" property, so you need to apply the effect to the images before you set the "image" property. That turns out to be far too slow when you refresh the screen very frequently.
So next I tried to use Core Graphics directly, doing the multiplication with a UIGraphics context and kCGBlendModeMultiply. This is much faster than using a CIFilter, but since you still have to apply the multiplication before rendering the image, you can still feel that images with the multiplication effect render noticeably slower than normal images.
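Roughly, the multiplication step looks like this (a simplified sketch; the method name and tint color are illustrative, not my exact code):

// Simplified sketch of the Core Graphics multiply step (names are illustrative).
- (UIImage *)multipliedImage:(UIImage *)image withColor:(UIColor *)tintColor
{
    CGRect bounds = CGRectMake(0, 0, image.size.width, image.size.height);
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:bounds];                               // draw the original image
    [tintColor setFill];
    UIRectFillUsingBlendMode(bounds, kCGBlendModeMultiply);  // multiply the color over it
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}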
My guess is that the fundamental problem with these two approaches is that you process the effect on the GPU, read the result image back with the CPU, and then render the result with the GPU again, so the data transfer between CPU and GPU wastes a lot of time. I then tried changing the superclass from UIImageView to UIView, moving the Core Graphics drawing code into drawRect:, and calling setNeedsDisplay whenever the "image" property is set. But this doesn't work well either: every time setNeedsDisplay is called the program becomes even slower than with a CIFilter, probably because there are several views displaying at once.
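The subclass looks roughly like this (a simplified sketch; the class name and the fixed red tint are placeholders):

// Simplified sketch of the UIView subclass that draws the image plus a multiply tint.
@interface TintedImageView : UIView
@property (nonatomic, strong) UIImage *image;
@end

@implementation TintedImageView

- (void)setImage:(UIImage *)image
{
    _image = image;
    [self setNeedsDisplay];   // redraw with the new image on the next display pass
}

- (void)drawRect:(CGRect)rect
{
    [self.image drawInRect:self.bounds];
    [[UIColor redColor] setFill];
    UIRectFillUsingBlendMode(self.bounds, kCGBlendModeMultiply);
}

@end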
I guess I could probably fix this with OpenGL, but I'm wondering whether I can solve the problem with UIKit only?

As far as I understand, you have to apply the same changes to different images, so the time for the initial setup is not critical, but each image should be processed as quickly as possible. First of all, it is critical to generate the new images on a background queue/thread.
There are two good ways to quickly process/generate images:
Use CIFilter from CoreImage
Use GPUImage library
If you use Core Image, check that you use CIFilter and CIContext properly. CIContext creation takes quite a lot of time, but it can be SHARED between different CIFilters and images, so you should create the CIContext only once! A CIFilter can also be SHARED between different images, but since it is not thread-safe you should have a separate CIFilter for each thread.
In my code I have the following:
+ (UIImage *)roundShadowImageForImage:(UIImage *)image
{
    // _context is the shared CIContext (declared here for completeness; in the original
    // class it lives as a static alongside _filter).
    static CIContext *_context;
    static CIFilter *_filter;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^
    {
        NSLog(@"CIContext and CIFilter generating...");
        _context = [CIContext contextWithOptions:@{ kCIContextUseSoftwareRenderer : @NO,
                                                    kCIContextWorkingColorSpace  : [NSNull null] }];
        CIImage *roundShadowImage = [CIImage imageWithCGImage:[[self class] roundShadowImage].CGImage];
        CIImage *maskImage = [CIImage imageWithCGImage:[[self class] roundWhiteImage].CGImage];
        _filter = [CIFilter filterWithName:@"CIBlendWithAlphaMask"
                             keysAndValues:
                                 kCIInputBackgroundImageKey, roundShadowImage,
                                 kCIInputMaskImageKey, maskImage, nil];
        NSLog(@"CIContext and CIFilter are generated");
    });

    if (image == nil) {
        return nil;
    }

    NSAssert(_filter, @"Error: CIFilter for cover images is not generated");

    // coverSize, coverSide and extraBorder are layout constants defined elsewhere in this class.
    CGSize imageSize = CGSizeMake(image.size.width * image.scale, image.size.height * image.scale);

    // CIContext and CIImage objects are immutable, which means each can be shared safely among threads.
    CIFilter *filterForThread = [_filter copy]; // CIFilter cannot be shared between different threads.

    CGAffineTransform imageTransform = CGAffineTransformIdentity;
    if (!CGSizeEqualToSize(imageSize, coverSize)) {
        NSLog(@"Cover image. Resizing image %@ to required size %@", NSStringFromCGSize(imageSize), NSStringFromCGSize(coverSize));
        CGFloat scaleFactor = MAX(coverSide / imageSize.width, coverSide / imageSize.height);
        imageTransform = CGAffineTransformMakeScale(scaleFactor, scaleFactor);
    }
    imageTransform = CGAffineTransformTranslate(imageTransform, extraBorder, extraBorder);

    CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
    ciImage = [ciImage imageByApplyingTransform:imageTransform];

    if (image.hasAlpha) { // hasAlpha is a UIImage category method from this project
        CIImage *ciWhiteImage = [CIImage imageWithCGImage:[self whiteImage].CGImage];
        CIFilter *filter = [CIFilter filterWithName:@"CISourceOverCompositing"
                                      keysAndValues:
                                          kCIInputBackgroundImageKey, ciWhiteImage,
                                          kCIInputImageKey, ciImage, nil];
        [filterForThread setValue:filter.outputImage forKey:kCIInputImageKey];
    }
    else {
        [filterForThread setValue:ciImage forKey:kCIInputImageKey];
    }

    CIImage *outputCIImage = [filterForThread outputImage];
    CGImageRef cgimg = [_context createCGImage:outputCIImage fromRect:[outputCIImage extent]];
    UIImage *newImage = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg);

    return newImage;
}
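As mentioned above, call this from a background queue and hop back to the main queue to update the UI; for example (a usage sketch, where the class and variable names are placeholders):

dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    UIImage *processed = [ImageProcessor roundShadowImageForImage:sourceImage]; // ImageProcessor is a placeholder class name
    dispatch_async(dispatch_get_main_queue(), ^{
        imageView.image = processed;   // UIKit must be touched on the main thread
    });
});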
If you are still not satisfied with the speed, try GPUImage. It is a very good library, and it is also very fast because it uses OpenGL for image generation.
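For example, a multiply blend of a still image with an overlay can stay entirely on the GPU; a minimal sketch, assuming GPUImage's Objective-C still-image API (the image variables are placeholders):

#import <GPUImage/GPUImage.h>

// Multiply-blend a source image with an overlay on the GPU (sketch).
GPUImagePicture *source  = [[GPUImagePicture alloc] initWithImage:sourceImage];
GPUImagePicture *overlay = [[GPUImagePicture alloc] initWithImage:overlayImage];
GPUImageMultiplyBlendFilter *multiply = [[GPUImageMultiplyBlendFilter alloc] init];

[source addTarget:multiply];      // first input of the blend
[overlay addTarget:multiply];     // second input of the blend

[multiply useNextFrameForImageCapture];
[source processImage];
[overlay processImage];

UIImage *result = [multiply imageFromCurrentFramebuffer];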

Related

CIFilter output image is showing previous output image at random

I've found a very weird behaviour of CIFilter with the CIGaussianBlur filter.
I am performing this method multiple times in fast succession for different images. SOMETIMES, the "last processed image" is returned instead of the one I pass in. For example, say I have the images:
A, B and C.
If I perform the blurring in fast succession, SOMETIMES I get a result like:
Blurred A, Blurred A, Blurred C
+ (UIImage *)applyBlurToImageAtPath:(NSURL *)imageUrlPath
{
    if (imageUrlPath == nil)
        return nil;

    // Tried creating a new context each time, and also using a singleton context:
    // if (CIImageContextSingleton == nil)
    // {
    //     CIImageContextSingleton = [CIContext contextWithOptions:nil];
    // }
    CIContext *context = [CIContext contextWithOptions:nil]; // [Domain sharedInstance].CIImageContextSingleton;

    CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [gaussianBlurFilter setDefaults];

    CIImage *inputImage = [CIImage imageWithContentsOfURL:imageUrlPath];
    [gaussianBlurFilter setValue:inputImage forKey:kCIInputImageKey];
    [gaussianBlurFilter setValue:@(1) forKey:kCIInputRadiusKey];

    // Tried both of these ways of getting the output image:
    CIImage *outputImage = [gaussianBlurFilter valueForKey:kCIOutputImageKey];
    // CIImage *outputImage = [gaussianBlurFilter outputImage];

    // If I do this instead, the problem never occurs, so it is isolated to gaussianBlurFilter:
    // outputImage = inputImage;

    CGImageRef cgimg = [context createCGImage:outputImage fromRect:[inputImage extent]];
    UIImage *resultImage = [UIImage imageWithCGImage:cgimg];

    // Tried both with and without releasing the cgimg
    CGImageRelease(cgimg);

    return resultImage;
}
I've tried it both in a loop and by running the method in response to a gesture or similar, and the same problem appears. (The image at the imageUrlPath is correct.) Also, see the comments in the code for things I've tried.
Am I missing something? Is there some internal cache in CIFilter? The method always runs on the main thread.
Based on the code given, and on the assumption that this method is always called on the main thread, you should be OK, but I do see some ill-advised things in the code:
Do not re-create your CIContext every time the method is called. I would suggest structuring it a different way, not as a singleton. Keep your CIContext around and re-use the same context when performing a lot of rendering.
If your CIFilter does not change, it is not necessary to re-create it every time either. If you are calling the method on the same thread, you can simply set the inputImage key on the filter. You will need to get a new outputImage from the filter whenever the input image changes.
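For example, a sketch of that caching (the method name is illustrative, and it assumes the method stays main-thread only, since CIFilter is not thread-safe):

+ (UIImage *)blurredImageAtURL:(NSURL *)imageUrlPath
{
    if (imageUrlPath == nil)
        return nil;

    // Context and filter are created once and reused across calls.
    static CIContext *context;
    static CIFilter *gaussianBlurFilter;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        context = [CIContext contextWithOptions:nil];
        gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
        [gaussianBlurFilter setDefaults];
        [gaussianBlurFilter setValue:@(1) forKey:kCIInputRadiusKey];
    });

    CIImage *inputImage = [CIImage imageWithContentsOfURL:imageUrlPath];
    [gaussianBlurFilter setValue:inputImage forKey:kCIInputImageKey];
    CIImage *outputImage = gaussianBlurFilter.outputImage; // re-read after changing the input

    CGImageRef cgimg = [context createCGImage:outputImage fromRect:inputImage.extent];
    UIImage *resultImage = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg);
    return resultImage;
}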
My guess is that the problem involves the Core Image context rendering to the same underlying graphics environment (probably GPU rendering), but since you are constantly recreating a CIContext, perhaps something wonky is going on.
That is just a guess, really, since I don't have code handy to test myself. If you have a test project that demonstrates the problem, it would be easier to debug. Also, I'm still skeptical of the threading: the fact that it works without applying the blur does not necessarily prove that the blur is causing the issue. In my experience, randomness like this is more likely to involve threading problems.

Using the GPU on iOS for Overlaying one image on another Image (Video Frame)

I am working on some image processing in my app: taking live video and adding an image on top of it as an overlay. Unfortunately this is taking massive amounts of CPU, which causes other parts of the program to slow down and not work as intended. Essentially I want to make the following code use the GPU instead of the CPU.
- (UIImage *)processUsingCoreImage:(CVPixelBufferRef)input
{
    CIImage *inputCIImage = [CIImage imageWithCVPixelBuffer:input];

    // Use Core Graphics for this
    UIImage *ghostImage = [self createPaddedGhostImageWithSize:CGSizeMake(1280, 720)]; // [UIImage imageNamed:@"myImage"];
    CIImage *ghostCIImage = [[CIImage alloc] initWithImage:ghostImage];

    CIFilter *blendFilter = [CIFilter filterWithName:@"CISourceAtopCompositing"];
    [blendFilter setValue:ghostCIImage forKeyPath:@"inputImage"];
    [blendFilter setValue:inputCIImage forKeyPath:@"inputBackgroundImage"];
    CIImage *blendOutput = [blendFilter outputImage];

    EAGLContext *myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    NSDictionary *contextOptions = @{ kCIContextWorkingColorSpace : [NSNull null],
                                      kCIContextUseSoftwareRenderer : @NO };
    CIContext *context = [CIContext contextWithEAGLContext:myEAGLContext options:contextOptions];

    CGImageRef outputCGImage = [context createCGImage:blendOutput fromRect:[blendOutput extent]];
    UIImage *outputImage = [UIImage imageWithCGImage:outputCGImage];
    CGImageRelease(outputCGImage);

    return outputImage;
}
Suggestions in order:
Do you really need to composite the two images? Is an AVCaptureVideoPreviewLayer with a UIImageView on top insufficient? You'd then just apply the current ghost transform to the image view (or its layer) and let the compositor glue the two together, for which it will use the GPU (see the sketch after this list).
If not, then the first port of call should be Core Image; it wraps GPU image operations into a relatively easy Swift/Objective-C package. There is a simple composition filter, so all you need to do is turn the two things into CIImages and use -imageByApplyingTransform: to adjust the ghost.
Failing both of those, you're looking at an OpenGL solution. You specifically want to use CVOpenGLESTextureCache to push Core Video frames to the GPU, and the ghost will simply live there permanently. Start from the GLCameraRipple sample for that stuff, then look into GLKBaseEffect to save yourself from needing to know GLSL if you don't already. All you should need to do is package up some vertices and make a drawing call.
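For the first suggestion, a sketch of the layer/view setup (the capture session, frame, and image name are placeholders):

// Let the system compositor combine the live preview and the ghost overlay on the GPU.
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.frame = self.view.bounds;
[self.view.layer addSublayer:previewLayer];

UIImageView *ghostView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"ghost"]];
ghostView.frame = CGRectMake(40, 40, 200, 200);   // placeholder position/size
[self.view addSubview:ghostView];                 // sits above the preview layer; no per-frame CPU work
// To move the ghost, just set ghostView.transform (or its layer's transform).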
The biggest performance issue is that each frame you create an EAGLContext and a CIContext. This needs to be done only once, outside of your processUsingCoreImage method.
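For example (a sketch, assuming an _ciContext instance variable; the property name is illustrative):

// Create the EAGL-backed CIContext once and reuse it for every frame.
- (CIContext *)ciContext
{
    if (_ciContext == nil) {
        EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        _ciContext = [CIContext contextWithEAGLContext:eaglContext
                                               options:@{ kCIContextWorkingColorSpace : [NSNull null],
                                                          kCIContextUseSoftwareRenderer : @NO }];
    }
    return _ciContext;
}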
Also, if you want to avoid the CPU-GPU round trip, instead of creating a Core Graphics image (createCGImage, which means CPU processing) you can render directly into the EAGL layer, along these lines:
[context drawImage:blendOutput inRect:destinationRect fromRect:blendOutput.extent]; // destinationRect is a placeholder for your target rect in the render buffer
[myEAGLContext presentRenderbuffer:GL_RENDERBUFFER];

UIImagePNGRepresentation returns nil after CIFilter

I'm running into a problem getting a PNG representation of a UIImage after rotating it with CIAffineTransform. First, I have a category on UIImage that rotates an image 90 degrees clockwise. It seems to work correctly when I display the rotated image in a UIImageView.
- (UIImage *)cwRotatedRepresentation
{
    // Not very precise, stop yelling at me.
    CGAffineTransform xfrm = CGAffineTransformMakeRotation(-(6.28 / 4.0));
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *inputImage = [CIImage imageWithCGImage:self.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIAffineTransform"];
    [filter setValue:inputImage forKey:@"inputImage"];
    [filter setValue:[NSValue valueWithBytes:&xfrm objCType:@encode(CGAffineTransform)] forKey:@"inputTransform"];
    CIImage *result = [filter valueForKey:@"outputImage"];
    CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
    return [[UIImage alloc] initWithCIImage:result];
}
However, when I try to actually get a PNG for the newly rotated image, UIImagePNGRepresentation returns nil.
- (NSData *)getPNG
{
    UIImage *myImg = [UIImage imageNamed:@"canada"];
    myImg = [myImg cwRotatedRepresentation];
    NSData *d = UIImagePNGRepresentation(myImg);
    // d == nil :(
    return d;
}
Is Core Image overwriting the PNG headers or something? Is there a way around this behavior, or a better means of achieving the desired result: a PNG representation of a UIImage rotated 90 degrees clockwise?
Not yelling, but -M_PI_2 will give you the constant you want with maximum precision :)
The only other thing I see is that you probably want to use [result extent] instead of [inputImage extent], unless your image is known to be square.
I'm not sure how that would cause UIImagePNGRepresentation to fail, though. One other thought: you create a CGImage but then build the UIImage from the CIImage; perhaps using initWithCGImage would give better results, since a purely CIImage-backed UIImage has no bitmap data for the PNG encoder to work with.
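Putting those points together, a sketch of the category method that returns a CGImage-backed UIImage (which UIImagePNGRepresentation can encode):

- (UIImage *)cwRotatedRepresentation
{
    CGAffineTransform xfrm = CGAffineTransformMakeRotation(-M_PI_2); // 90 degrees clockwise
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *inputImage = [CIImage imageWithCGImage:self.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIAffineTransform"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:[NSValue valueWithBytes:&xfrm objCType:@encode(CGAffineTransform)] forKey:@"inputTransform"];
    CIImage *result = filter.outputImage;

    CGImageRef cgImage = [context createCGImage:result fromRect:result.extent];
    UIImage *rotated = [UIImage imageWithCGImage:cgImage]; // CGImage-backed, so PNG encoding works
    CGImageRelease(cgImage);
    return rotated;
}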

CIFilters inside Dispatch Queue causing memory issue in an ARC enabled project

I was running some CIFilters to blur graphics and it was very laggy, so I wrapped my code in
dispatch_async(dispatch_get_main_queue(), ^{ /*...*/ });
Everything sped up and it ROCKED! Very fast processing, seamless blurring, great!
After about a minute, though, the app crashes at around 250 MB of memory (when I don't use dispatch I only use around 50 MB consistently, because ARC manages it all).
I used ARC for my whole project, so I tried manually managing memory by releasing the CIFilters inside my dispatch block, but Xcode keeps returning errors and won't let me release manually since I'm using ARC. At this point it would be an insane hassle to turn off ARC and go through every .m file to manage memory by hand.
So how do I specifically manage memory inside dispatch for my CIFilters?
I tried wrapping it all in an @autoreleasepool { /*...*/ } (which ARC strangely allows?), but it did not work. /:
Example code inside dispatch thread:
UIImage *theImage5 = imageViewImDealingWith.image;
CIContext *context5 = [CIContext contextWithOptions:nil];
CIImage *inputImage5 = [CIImage imageWithCGImage:theImage5.CGImage];

// setting up Gaussian Blur (we could use one of many filters offered by Core Image)
CIFilter *filter5 = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter5 setValue:inputImage5 forKey:kCIInputImageKey];
[filter5 setValue:[NSNumber numberWithFloat:5.00f] forKey:@"inputRadius"];
CIImage *result = [filter5 valueForKey:kCIOutputImageKey];

// CIGaussianBlur has a tendency to shrink the image a little;
// this ensures it matches up exactly to the bounds of our original image
CGImageRef cgImage = [context5 createCGImage:result fromRect:[inputImage5 extent]];
imageViewImDealingWith.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);

context5 = nil;
inputImage5 = nil;
filter5 = nil;
result = nil;
Do you release the cgImage?
CGImageRelease(cgImage);
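A sketch of how the block could look if the CIContext is created once and reused, with the per-image work wrapped in @autoreleasepool (sharedCIContext is an assumed, pre-created property, not from the original code):

dispatch_async(dispatch_get_main_queue(), ^{
    @autoreleasepool {
        UIImage *theImage = imageViewImDealingWith.image;
        CIImage *inputImage = [CIImage imageWithCGImage:theImage.CGImage];

        CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
        [blur setValue:inputImage forKey:kCIInputImageKey];
        [blur setValue:@5.0f forKey:kCIInputRadiusKey];

        // self.sharedCIContext is created once elsewhere and reused here (assumed property).
        CGImageRef cgImage = [self.sharedCIContext createCGImage:blur.outputImage
                                                        fromRect:inputImage.extent];
        imageViewImDealingWith.image = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);
    }
});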

handle memory warning while applying core image filter

I am using the following code to apply image filters. It works fine on scaled-down images.
But when I apply more than two filters to a full-resolution image, the app crashes. A memory warning is received.
When I open the Allocations instrument, I see that CFData (store) takes up most of the memory used by the program.
When I apply more than two filters to a full-resolution image, the 'overall bytes' go up to 54 MB. The 'live bytes' don't seem to reach more than 12 MB when I eyeball the numbers, but the spikes show that live bytes also reach up to that figure and come back down.
Where am I going wrong?
- (UIImage *)editImage:(UIImage *)imageToBeEdited tintValue:(float)tint
{
    CIImage *image = [[CIImage alloc] initWithImage:imageToBeEdited];
    NSLog(@"in edit Image:\ncheck image: %@\ncheck value: %f", image, tint);

    [tintFilter setValue:image forKey:kCIInputImageKey];
    [tintFilter setValue:[NSNumber numberWithFloat:tint] forKey:@"inputAngle"];
    CIImage *outputImage = [tintFilter outputImage];
    NSLog(@"check output image: %@", outputImage);

    return [self completeEditingUsingOutputImage:outputImage];
}

- (UIImage *)completeEditingUsingOutputImage:(CIImage *)outputImage
{
    CGImageRef cgimg = [context createCGImage:outputImage fromRect:outputImage.extent];
    NSLog(@"check cgimg: %@", cgimg);
    UIImage *newImage = [UIImage imageWithCGImage:cgimg];
    NSLog(@"check newImge: %@", newImage);
    CGImageRelease(cgimg);
    return newImage;
}
Edit:
I also tried setting cgimg to nil. Didn't help.
I tried putting the context declaration and definition inside the second function. Didn't help.
I tried moving the declarations and definitions of the filters inside the functions. Didn't help.
Also, the crash happens at
CGImageRef cgimg = [context createCGImage:outputImage fromRect:outputImage.extent];
The cgimg I was making took up most of the space in memory and was not getting released.
I observed that calling the filter with smaller values brings the CFData (store) memory back down to a smaller value, thus avoiding the crash.
So I apply the filter and, after that, call the same filter with the image as nil. This takes memory back to around 484 KB from 48 MB after applying all four filters.
Also, I am applying these filters on a background thread instead of the main thread. Applying them on the main thread again causes a crash; probably it doesn't get enough time to release the memory, I don't know.
But these things are working smoothly now.
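In code, the workaround described above boils down to something like this (a sketch):

// After rendering, clear the filter's input so Core Image can release the full-resolution image data.
[tintFilter setValue:nil forKey:kCIInputImageKey];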
// Where is your input filter name? Like this:
[tintFilter setValue:image forKey:@"CIHueAdjust"];
// I think you have a mistake in outputImage.extent. You just write this:
CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
