CIFilter with UISlider - iOS

So I have a UISlider that is changing a hue using CIFilter.
It's insanely slow because I'm re-filtering the base image on every value change while the UISlider is in use.
Does anyone have suggestions on how to do this more efficiently?
// UI SLIDER
-(IBAction)changeSlider:(id)sender {
    [self doHueAdjustFilterWithBaseImage:currentSticker.image
                               hueAdjust:[(UISlider *)sender value]];
}
//Change HUE
-(void)doHueAdjustFilterWithBaseImage:(UIImage *)baseImage hueAdjust:(CGFloat)hueAdjust {
    CIImage *inputImage = [[CIImage alloc] initWithImage:baseImage];

    CIFilter *controlsFilter = [CIFilter filterWithName:@"CIHueAdjust"];
    [controlsFilter setValue:inputImage forKey:kCIInputImageKey];
    [controlsFilter setValue:@(hueAdjust) forKey:@"inputAngle"];
    //NSLog(@"%@", controlsFilter.attributes);

    CIImage *displayImage = controlsFilter.outputImage;
    CIContext *context = [CIContext contextWithOptions:nil];

    if (displayImage == nil) {
        NSLog(@"Display NADA");
    } else {
        NSLog(@"RETURN Image");
        CGImageRef cgImage = [context createCGImage:displayImage fromRect:displayImage.extent];
        currentSticker.image = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage); // release the intermediate CGImage so it doesn't leak
    }
}

You can set the UISlider's continuous property to NO, so that changeSlider: only gets called when the user releases their finger. See Apple's documentation for the continuous property.
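A minimal sketch of that change, assuming the slider is configured in code (the hueSlider outlet name is an assumption):

// hueSlider is a hypothetical outlet; with continuous = NO the value-changed
// action only fires when the user lifts their finger, not on every movement.
self.hueSlider.continuous = NO;
[self.hueSlider addTarget:self
                   action:@selector(changeSlider:)
         forControlEvents:UIControlEventValueChanged];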

Your problem here is that you keep re-creating the CIContext. Make the context a property, initialize it once in your initializer, then use it over and over, and you'll see a massive performance gain.
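A sketch of that change, assuming the filtering lives in the same view controller as currentSticker (the class and property names here are illustrative, not from the question):

@interface StickerViewController : UIViewController
@property (nonatomic, strong) UIImageView *currentSticker; // as in the question
@property (nonatomic, strong) CIContext *filterContext;    // created once, reused
@end

@implementation StickerViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // Creating a CIContext is expensive; do it once, not on every slider event.
    self.filterContext = [CIContext contextWithOptions:nil];
}

- (void)doHueAdjustFilterWithBaseImage:(UIImage *)baseImage hueAdjust:(CGFloat)hueAdjust {
    CIImage *inputImage = [[CIImage alloc] initWithImage:baseImage];

    CIFilter *filter = [CIFilter filterWithName:@"CIHueAdjust"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:@(hueAdjust) forKey:kCIInputAngleKey];

    CIImage *output = filter.outputImage;
    if (output == nil) { return; }

    // Reuse the long-lived context instead of calling +contextWithOptions: per call.
    CGImageRef cgImage = [self.filterContext createCGImage:output fromRect:output.extent];
    self.currentSticker.image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
}

@end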

Rather than changing the behaviour you want, I'd keep the frequent update while sliding; it's good, and it's best for the user experience. So don't change that behaviour, but rather work on your algorithm to achieve greater optimisation. Check this previously asked question, which has some good tips:
How to implement fast image filters on iOS platform
My approach would be to keep updating from the slider, but only re-filter the image when the value crosses a coarse step (for example when it passes through 10.0, 20.0, 30.0) rather than on every single value change. A sketch of that idea follows. Hope this makes sense.
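Something along these lines, perhaps; lastAppliedValue is a hypothetical property and the step of 10.0 is arbitrary:

// Only re-run the filter when the slider value crosses a coarse step.
- (IBAction)changeSlider:(UISlider *)sender {
    CGFloat step = 10.0f;
    CGFloat snapped = roundf(sender.value / step) * step;
    if (snapped != self.lastAppliedValue) {
        self.lastAppliedValue = snapped;
        [self doHueAdjustFilterWithBaseImage:currentSticker.image hueAdjust:snapped];
    }
}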
EDIT:
Make your input image and filter variables ivars, check whether they have already been created, and reuse them rather than creating them every single time.
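A rough sketch of that edit; _inputCIImage and _hueFilter are assumed ivars, and filterContext is the long-lived context suggested in the answer above (all names are illustrative):

// Lazily create the input image and filter once, then only update the angle.
- (void)applyHueAdjust:(CGFloat)hueAdjust {
    if (_inputCIImage == nil) {
        _inputCIImage = [[CIImage alloc] initWithImage:currentSticker.image];
    }
    if (_hueFilter == nil) {
        _hueFilter = [CIFilter filterWithName:@"CIHueAdjust"];
        [_hueFilter setValue:_inputCIImage forKey:kCIInputImageKey];
    }
    [_hueFilter setValue:@(hueAdjust) forKey:kCIInputAngleKey];

    CIImage *output = _hueFilter.outputImage;
    if (output == nil) { return; }

    CGImageRef cgImage = [self.filterContext createCGImage:output fromRect:output.extent];
    currentSticker.image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
}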

Related

CIFilter output image is showing previous output image at random

I've found some very weird behaviour with CIFilter and the CIGaussianBlur filter.
I am calling this method multiple times in fast succession for different images. SOMETIMES, the "last processed image" is returned instead of the one I pass in. For example, if I have the images:
A, B and C.
If I perform the blurring in fast succession, SOMETIMES I get a result like:
Blurred A, Blurred A, Blurred C
+(UIImage *)applyBlurToImageAtPath:(NSURL *)imageUrlPath
{
    if (imageUrlPath == nil)
        return nil;

    // Tried to create a new context each time, and also tried to use a singleton context
    // if (CIImageContextSingleton == nil)
    // {
    //     CIImageContextSingleton = [CIContext contextWithOptions:nil];
    // }
    CIContext *context = [CIContext contextWithOptions:nil]; // [Domain sharedInstance].CIImageContextSingleton;

    CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [gaussianBlurFilter setDefaults];

    CIImage *inputImage = [CIImage imageWithContentsOfURL:imageUrlPath];
    [gaussianBlurFilter setValue:inputImage forKey:kCIInputImageKey];
    [gaussianBlurFilter setValue:@(1) forKey:kCIInputRadiusKey];

    // Tried both these methods for getting the output image
    CIImage *outputImage = [gaussianBlurFilter valueForKey:kCIOutputImageKey];
    // CIImage *outputImage = [gaussianBlurFilter outputImage];

    // If I do this, the problem never occurs, so the problem is isolated to the gaussianBlurFilter:
    // outputImage = inputImage;

    CGImageRef cgimg = [context createCGImage:outputImage fromRect:[inputImage extent]];
    UIImage *resultImage = [UIImage imageWithCGImage:cgimg];

    // Tried both with and without releasing the cgimg
    CGImageRelease(cgimg);

    return resultImage;
}
I've tried calling it both in a loop and from a gesture handler, and the same problem appears. (The image at imageUrlPath is correct.) Also, see the comments in the code for things I've tried.
Am I missing something? Is there some internal cache in CIFilter? The method always runs on the main thread.
Based on the code given, and on the assumption that this method is always called on the main thread, you should be OK, but I do see some things in the code that are ill-advised:
Do not re-create your CIContext every time the method is called. I would suggest structuring it a different way, not as a singleton; keep your CIContext around and reuse the same context when performing a lot of rendering.
If your CIFilter does not change, it is not necessary to re-create it every time either. If you are calling the method on the same thread, you can simply set the inputImage key on the filter. You will need to get a new outputImage from the filter whenever the input image changes.
My guess is that the problem is likely around the Core Image context rendering to the same underlying graphics environment (probably GPU rendering), but since you are constantly recreating the CIContext, perhaps there is something wonky going on.
Just a guess really, since I don't have code handy to test myself. If you have a test project that demonstrates the problem, it would be easier to debug. Also, I'm still skeptical of the threading: the fact that it works without applying the blur does not necessarily prove that the blur is causing the issue; in my experience, randomness is more likely to involve threading problems.
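A sketch of what that restructuring could look like (BlurProvider is a hypothetical class name, not from the thread; it assumes single-threaded use):

@interface BlurProvider : NSObject
- (UIImage *)blurredImageAtURL:(NSURL *)url;
@end

@implementation BlurProvider {
    CIContext *_context;    // created once
    CIFilter *_blurFilter;  // created once, input swapped per call
}

- (instancetype)init {
    if ((self = [super init])) {
        _context = [CIContext contextWithOptions:nil];
        _blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
        [_blurFilter setValue:@(1) forKey:kCIInputRadiusKey];
    }
    return self;
}

// Call from a single thread (e.g. the main thread).
- (UIImage *)blurredImageAtURL:(NSURL *)url {
    if (url == nil) return nil;

    CIImage *input = [CIImage imageWithContentsOfURL:url];
    [_blurFilter setValue:input forKey:kCIInputImageKey];

    // outputImage must be re-read after the input image changes.
    CIImage *output = _blurFilter.outputImage;
    CGImageRef cgimg = [_context createCGImage:output fromRect:input.extent];
    UIImage *result = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg);
    return result;
}

@end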

Using the GPU on iOS for Overlaying one image on another Image (Video Frame)

I am working on some image processing in my app: taking live video and adding an image on top of it to use as an overlay. Unfortunately this is taking massive amounts of CPU, which is causing other parts of the program to slow down and not work as intended. Essentially I want to make the following code use the GPU instead of the CPU.
- (UIImage *)processUsingCoreImage:(CVPixelBufferRef)input {
    CIImage *inputCIImage = [CIImage imageWithCVPixelBuffer:input];

    // Use Core Graphics for this
    UIImage *ghostImage = [self createPaddedGhostImageWithSize:CGSizeMake(1280, 720)]; // [UIImage imageNamed:@"myImage"];
    CIImage *ghostCIImage = [[CIImage alloc] initWithImage:ghostImage];

    CIFilter *blendFilter = [CIFilter filterWithName:@"CISourceAtopCompositing"];
    [blendFilter setValue:ghostCIImage forKeyPath:@"inputImage"];
    [blendFilter setValue:inputCIImage forKeyPath:@"inputBackgroundImage"];
    CIImage *blendOutput = [blendFilter outputImage];

    EAGLContext *myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    NSDictionary *contextOptions = @{ kCIContextWorkingColorSpace : [NSNull null],
                                      kCIContextUseSoftwareRenderer : @(NO) };
    CIContext *context = [CIContext contextWithEAGLContext:myEAGLContext options:contextOptions];

    CGImageRef outputCGImage = [context createCGImage:blendOutput fromRect:[blendOutput extent]];
    UIImage *outputImage = [UIImage imageWithCGImage:outputCGImage];
    CGImageRelease(outputCGImage);

    return outputImage;
}
Suggestions in order:
do you really need to composite the two images? Is an AVCaptureVideoPreviewLayer with a UIImageView on top insufficient? You'd then just apply the current ghost transform to the image view (or its layer) and let the compositor glue the two together, for which it will use the GPU (see the sketch after this list).
if not then first port of call should be CoreImage — it wraps up GPU image operations into a relatively easy Swift/Objective-C package. There is a simple composition filter so all you need to do is make the two things into CIImages and use -imageByApplyingTransform: to adjust the ghost.
failing both of those, you're looking at an OpenGL solution. You specifically want to use CVOpenGLESTextureCache to push Core Video frames to the GPU, and the ghost will simply live on the GPU permanently. Start from the GLCameraRipple sample for that, then look into GLKBaseEffect to save yourself from needing to know GLSL if you don't already. All you should need to do is package up some vertices and make a drawing call.
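A sketch of the first suggestion; it requires AVFoundation, and captureSession, the ghost image name, and the view names are assumptions:

// Live preview straight from AVFoundation; the ghost is just a UIImageView on
// top, so the system compositor does the overlay on the GPU.
AVCaptureVideoPreviewLayer *previewLayer =
    [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.frame = self.view.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:previewLayer];

UIImageView *ghostView =
    [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"ghost"]];
ghostView.frame = self.view.bounds;
ghostView.contentMode = UIViewContentModeScaleAspectFit;
[self.view addSubview:ghostView];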
The biggest performance issue is that you create the EAGLContext and the CIContext on every frame. They only need to be created once, outside of your processUsingCoreImage: method.
Also, if you want to avoid the CPU-GPU round trip of creating a Core Graphics image (createCGImage, and thus CPU processing), you can render directly into the EAGL-backed layer like this:
[context drawImage:blendOutput inRect:/* destination rect */ fromRect:/* source rect */];
[myEAGLContext presentRenderbuffer:GL_RENDERBUFFER];
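A rough sketch of hoisting those objects out of the per-frame path (FrameProcessor and its method names are hypothetical):

// Create the expensive objects once and keep them for the lifetime of the processor.
@interface FrameProcessor : NSObject
@property (nonatomic, strong, readonly) EAGLContext *eaglContext;
@property (nonatomic, strong, readonly) CIContext *ciContext;
@end

@implementation FrameProcessor

- (instancetype)init {
    if ((self = [super init])) {
        _eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        _ciContext = [CIContext contextWithEAGLContext:_eaglContext
                                               options:@{ kCIContextWorkingColorSpace : [NSNull null] }];
    }
    return self;
}

- (CIImage *)blendGhost:(CIImage *)ghost overFrame:(CVPixelBufferRef)frame {
    CIImage *background = [CIImage imageWithCVPixelBuffer:frame];
    CIFilter *blend = [CIFilter filterWithName:@"CISourceAtopCompositing"];
    [blend setValue:ghost forKey:kCIInputImageKey];
    [blend setValue:background forKey:kCIInputBackgroundImageKey];
    // Render the result with the long-lived _ciContext (e.g. drawImage:inRect:fromRect:)
    // rather than creating a new context here.
    return blend.outputImage;
}

@end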

GPUImage not releasing memory

Memory keeps building up and is not released when using GPUImage. If I change the filter applied to a UIImage, the memory from the old filter is never released.
My code:
I have a UIImageView as a class property:
@property (nonatomic, strong) UIImageView *capturedImageView;
I apply the filter like so:
GPUImageFilter *filter = [[GPUImageGrayscaleFilter alloc] init];
UIImage *filteredImage = [filter imageByFilteringImage:[UIImage imageWithCGImage:myImage.CGImage]];
[filter removeAllTargets];
[filter endProcessing];
I then set the imageView.image property to the filtered image
self.capturedImageView.image = filteredImage;
Now at any time I might change the filter, based on what the user selects (i.e. an Instagram-like filter list). When I do change the filter I just use the code above, but alloc a different filter.
I don't understand why memory keeps building up. I tried setting my imageView.image property to nil and the GPUImageFilter to nil, but that does nothing.

CIFilters inside Dispatch Queue causing memory issue in an ARC enabled project

I was running some CIFilters to blur graphics and it was very laggy so I wrapped my code in
dispatch_async(dispatch_get_main_queue(), ^{ /*...*/ });
Everything sped up and it ROCKED! Very fast processing, seamless blurring, great!
After about a minute, though, the app crashes at around 250 MB of memory (when I don't use dispatch I stay around 50 MB consistently, because ARC manages it all).
I used ARC for my whole project, so I tried manually managing memory by releasing the CIFilters inside my dispatch block, but Xcode keeps returning errors and won't let me release manually since I'm using ARC. At this point it would be an insane hassle to turn off ARC and go through every .m file managing memory manually.
So how do I specifically manage memory inside dispatch for my CIFilters?
I tried wrapping it all in an @autoreleasepool { /*...*/ } (which ARC strangely allows?), but it did not work. /:
Example code inside dispatch thread:
UIImage *theImage5 = imageViewImDealingWith.image;
CIContext *context5 = [CIContext contextWithOptions:nil];
CIImage *inputImage5 = [CIImage imageWithCGImage:theImage5.CGImage];

// setting up Gaussian Blur (we could use one of many filters offered by Core Image)
CIFilter *filter5 = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter5 setValue:inputImage5 forKey:kCIInputImageKey];
[filter5 setValue:[NSNumber numberWithFloat:5.00f] forKey:@"inputRadius"];
CIImage *result = [filter5 valueForKey:kCIOutputImageKey];

// CIGaussianBlur has a tendency to shrink the image a little;
// this ensures it matches up exactly to the bounds of our original image
CGImageRef cgImage = [context5 createCGImage:result fromRect:[inputImage5 extent]];
imageViewImDealingWith.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);

context5 = nil;
inputImage5 = nil;
filter5 = nil;
result = nil;
Do you release the cgImage?
CGImageRelease(cgImage);
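One pattern that is often suggested, sketched here as an assumption rather than a confirmed fix for this crash: create the CIContext once and wrap each blur pass in its own @autoreleasepool, so the intermediate Core Image objects can be released as soon as the pass finishes.

// Reuse one CIContext instead of creating one per pass.
static CIContext *sharedContext;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
    sharedContext = [CIContext contextWithOptions:nil];
});

dispatch_async(dispatch_get_main_queue(), ^{
    @autoreleasepool {
        UIImage *theImage = imageViewImDealingWith.image;
        CIImage *input = [CIImage imageWithCGImage:theImage.CGImage];

        CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
        [blur setValue:input forKey:kCIInputImageKey];
        [blur setValue:@5.0f forKey:kCIInputRadiusKey];

        CGImageRef cgImage = [sharedContext createCGImage:blur.outputImage
                                                  fromRect:input.extent];
        imageViewImDealingWith.image = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);
    } // intermediates created in this pass are released here
});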

Reusing filter Causes black screen in GPUImage

I am creating a set of GPUImageToneCurveFilters and storing them in an array.
First I create a preview video view for each filter using GPUImageVideoCamera. After selecting a filter, I try to add that filter to the detail view (GPUImageStillCamera), but I get a black screen for this.
If I recreate a new filter instead of reusing one and then add it to the GPUImageStillCamera, it works fine.
Any solution to this?
Preview view creation code:
-(void)setUpUI {
    self.videoView.fillMode = kGPUImageFillModePreserveAspectRatioAndFill;
}

-(void)addFilter:(id)filter
{
    // For thumbnails smaller than the input video size, we currently need to make them render at a smaller size.
    // This is to avoid wasting processing time on larger frames than will be displayed.
    // You'll need to use -forceProcessingAtSize: with a zero size to re-enable full frame processing of video.
    self.filter = filter;
    [filter forceProcessingAtSize:self.videoView.sizeInPixels];
    [[CameraProvider sharedProvider] addTarget:filter];
    [filter addTarget:self.videoView];
    [[CameraProvider sharedProvider] startCameraCapture];
    self.titleLabel.text = [filter fliterName];
}

-(void)stopCamera
{
    [self.filter removeAllTargets];
    [[CameraProvider sharedProvider] removeTarget:self.filter];
    [[CameraProvider sharedProvider] stopCameraCapture];
}

-(IBAction)selectionDone:(id)sender {
    [[CameraProvider sharedProvider] removeInputsAndOutputs];
    self.selectedFilter(self.filter);
}
// Adding to the detail view (GPUImageStillCamera):
- (void)didSelectFilter:(id)newfilter
{
    NSLog(@"filter");
    // newfilter = [[GPUImageToneCurveFilter alloc] initWithACV:@"california-gold-rush.acv"];
    [newfilter prepareForImageCapture];
    [stillCamera addTarget:newfilter];
    [newfilter addTarget:self.imageView];
    [stillCamera startCameraCapture];
}
If I recreate a new filter instead of reusing one and then add it to the GPUImageStillCamera, it works fine.
I hate to state the obvious, but the solution is to recreate the filter when you need it rather than trying to reuse it.
What you want from that array is "a way of getting a filter object given an index". There are many ways of getting that filter object. One of them is to preallocate an array and index into the array. Another is to write a function that, given an index, returns a newly-created object of the same type as you would have retrieved from the array. Instead of having an array of filters, use an array of factories for filters.
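A sketch of that factory idea; selectedIndex and the particular filters chosen are placeholders:

// Store blocks that build filters, not the filter instances themselves.
typedef GPUImageFilter *(^FilterFactory)(void);

NSArray *filterFactories = @[
    ^GPUImageFilter *(void){ return [[GPUImageGrayscaleFilter alloc] init]; },
    ^GPUImageFilter *(void){ return [[GPUImageToneCurveFilter alloc]
                                        initWithACV:@"california-gold-rush.acv"]; },
];

// Each lookup produces a brand-new filter, so nothing stale is ever re-attached
// to the GPUImageStillCamera.
FilterFactory makeFilter = filterFactories[selectedIndex];
GPUImageFilter *freshFilter = makeFilter();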
