Memory keeps building up and is never released with GPUImage. If I change the filter on a UIImage, the old filter's memory is never released.
My code:
I have a UIImageView as a class property:
@property (nonatomic, strong) UIImageView *capturedImageView;
I apply the filter like so:
GPUImageFilter *filter = [[GPUImageGrayscaleFilter alloc] init];
UIImage *filteredImage = [filter imageByFilteringImage:[UIImage imageWithCGImage:myImage.CGImage]];
[filter removeAllTargets];
[filter endProcessing];
I then set the imageView.image property to the filtered image
self.capturedImageView.image = filteredImage;
Now at any time I might change the filter, based on what a user selects (i.e. an Instagram-like filter list). When I do change the filter I just use the code above, but alloc a different filter.
I don't understand why memory keeps building up. I tried setting my imageView.image property to nil and the GPUImageFilter to nil, but that does nothing.
I am working on some image processing in my app: taking live video and adding an image on top of it as an overlay. Unfortunately this is taking massive amounts of CPU, which is causing other parts of the program to slow down and not work as intended. Essentially I want to make the following code use the GPU instead of the CPU:
- (UIImage *)processUsingCoreImage:(CVPixelBufferRef)input {
CIImage *inputCIImage = [CIImage imageWithCVPixelBuffer:input];
// Use Core Graphics for this
UIImage * ghostImage = [self createPaddedGhostImageWithSize:CGSizeMake(1280, 720)]; // [UIImage imageNamed:@"myImage"];
CIImage * ghostCIImage = [[CIImage alloc] initWithImage:ghostImage];
CIFilter * blendFilter = [CIFilter filterWithName:@"CISourceAtopCompositing"];
[blendFilter setValue:ghostCIImage forKeyPath:@"inputImage"];
[blendFilter setValue:inputCIImage forKeyPath:@"inputBackgroundImage"];
CIImage * blendOutput = [blendFilter outputImage];
EAGLContext *myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
NSDictionary *contextOptions = @{ kCIContextWorkingColorSpace : [NSNull null], kCIContextUseSoftwareRenderer : @NO };
CIContext *context = [CIContext contextWithEAGLContext:myEAGLContext options:contextOptions];
CGImageRef outputCGImage = [context createCGImage:blendOutput fromRect:[blendOutput extent]];
UIImage * outputImage = [UIImage imageWithCGImage:outputCGImage];
CGImageRelease(outputCGImage);
return outputImage;
}
Suggestions in order:
Do you really need to composite the two images? Is an AVCaptureVideoPreviewLayer with a UIImageView on top insufficient? You'd then just apply the current ghost transform to the image view (or its layer) and let the compositor glue the two together, for which it will use the GPU (see the sketch after these suggestions).
If not, then the first port of call should be Core Image — it wraps up GPU image operations into a relatively easy Swift/Objective-C package. There is a simple composition filter, so all you need to do is make the two things into CIImages and use -imageByApplyingTransform: to adjust the ghost.
Failing both of those, you're looking at an OpenGL solution. You specifically want to use CVOpenGLESTextureCache to push Core Video frames to the GPU, and the ghost will simply permanently live there. Start from the GLCameraRipple sample for that, then look into GLKBaseEffect to save yourself from needing to know GLSL if you don't already. All you should need to do is package up some vertices and make a drawing call.
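For illustration, a minimal sketch of the first suggestion; captureSession, ghostImage and ghostTransform are placeholders, not from the original code:
// Live preview on the GPU, with the ghost composited by the window server.
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.frame = self.view.bounds;
[self.view.layer addSublayer:previewLayer];
UIImageView *ghostView = [[UIImageView alloc] initWithImage:ghostImage];
ghostView.frame = self.view.bounds;
ghostView.transform = ghostTransform; // move/scale the ghost here; blending happens on the GPU
[self.view addSubview:ghostView];     // sits above the preview layer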
The biggest performance issue is that you create the EAGLContext and CIContext on every frame. They need to be created only once, outside of your processUsingCoreImage: method.
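For example, a sketch only (the property names are illustrative):
// In the class extension:
@property (nonatomic, strong) EAGLContext *eaglContext;
@property (nonatomic, strong) CIContext *ciContext;
// Once, e.g. in viewDidLoad:
self.eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
self.ciContext = [CIContext contextWithEAGLContext:self.eaglContext
                                           options:@{ kCIContextWorkingColorSpace : [NSNull null] }];
// processUsingCoreImage: then uses self.ciContext instead of building a new context per frame.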
Also, if you want to avoid the CPU-GPU roundtrip, instead of creating a Core Graphics image with createCGImage (and thus CPU processing), you can render directly into the EAGL-backed layer like this:
[context drawImage:blendOutput inRect:/* destination rect */ fromRect:/* source rect */];
[myEAGLContext presentRenderbuffer:GL_RENDERBUFFER];
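If the output goes into a GLKView that shares myEAGLContext, the render step might look roughly like this (glkView is an assumed outlet, not part of the original code):
[self.glkView bindDrawable];
[context drawImage:blendOutput
            inRect:CGRectMake(0, 0, self.glkView.drawableWidth, self.glkView.drawableHeight)
          fromRect:[blendOutput extent]];
[self.glkView display]; // no CGImage is created, so the frame never leaves the GPU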
I'm trying to apply a blend filter to 2 images.
I've recently updated GPUImage to the latest version.
To make things simple I've modified the example SimpleImageFilter.
Here is the code:
UIImage * image1 = [UIImage imageNamed:@"PGSImage_0000.jpg"];
UIImage * image2 = [UIImage imageNamed:@"PGSImage_0001.jpg"];
twoinputFilter = [[GPUImageColorBurnBlendFilter alloc] init];
sourcePicture1 = [[GPUImagePicture alloc] initWithImage:image1 ];
sourcePicture2 = [[GPUImagePicture alloc] initWithImage:image2 ];
[sourcePicture1 addTarget:twoinputFilter];
[sourcePicture1 processImage];
[sourcePicture2 addTarget:twoinputFilter];
[sourcePicture2 processImage];
UIImage * image = [twoinputFilter imageFromCurrentFramebuffer];
The image returned is nil. Setting some breakpoints, I can see that the filter fails inside the method - (CGImageRef)newCGImageFromCurrentlyProcessedOutput; the problem is that the framebufferForOutput is nil. I'm using the simulator.
I don't get why it isn't working.
It seems that I was missing this command, as written in the documentation for still image processing:
Note that for a manual capture of an image from a filter, you need to
set -useNextFrameForImageCapture in order to tell the filter that
you'll be needing to capture from it later. By default, GPUImage
reuses framebuffers within filters to conserve memory, so if you need
to hold on to a filter's framebuffer for manual image capture, you
need to let it know ahead of time.
[twoinputFilter useNextFrameForImageCapture];
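For reference, a sketch of the full working order using the same variables as above:
[sourcePicture1 addTarget:twoinputFilter];
[sourcePicture2 addTarget:twoinputFilter];
[twoinputFilter useNextFrameForImageCapture]; // must be called before processImage
[sourcePicture1 processImage];
[sourcePicture2 processImage];
UIImage * image = [twoinputFilter imageFromCurrentFramebuffer];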
I would like to create a GPUImageView to display a filter in real time (as opposed to repeatedly reading imageFromCurrentlyProcessedOutput).
Is it possible to use GPUImage's GPUImageChromaKeyBlendFilter with a still source image automatically updating a GPUImageView?
Here is my code reading the images into UIImages:
UIImage *inputImage = [UIImage imageNamed:@"1.JPG"];
UIImage *backgroundImage = [UIImage imageNamed:@"2.JPG"];
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageChromaKeyBlendFilter *stillImageFilter = [[GPUImageChromaKeyBlendFilter alloc] init];
[stillImageFilter setThresholdSensitivity:0.5];
[stillImageFilter setColorToReplaceRed:0.0 green:1.0 blue:0.0];
[stillImageSource addTarget:stillImageFilter];
[stillImageSource processImage];
UIImage *currentFilteredVideoFrame = [stillImageFilter imageByFilteringImage: backgroundImage ];
Everything I have tried so far requires you to add the 'backgroundImage' as a target to the filter (as you would if you were using the StillCamera). If you add the backgroundImage as a target, GPUImage just uses this new image as its base image.
Can anyone help?
Thanks,
Don't use -imageByFilteringImage: with a two-input filter, like blends. It's a convenience method to quickly set up a small filter chain based on a UIImage and grab a UIImage out. You're not going to want it for something targeting a GPUImageView, anyway.
For the chroma key blend, you'll need to target your input image (the one with the color to be replaced) and background image to the blend, in that order using -addTarget, with GPUImagePicture instances for both. You then target your blend to the GPUImageView.
One note, you'll need to maintain strong references to your GPUImagePictures past the setup method, if you want to keep updating the filter after this point, so you may need to make them instance variables on your controller class.
Once you've set things up in this way, the result will go to your GPUImageView. Every time you call -processImage on one of the two images, the display in your GPUImageView will be updated. Therefore, you can call that after every change in filter settings, like if you had a slider to update filter values, and the image will be updated in realtime.
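Putting that together, a rough sketch of the setup; gpuImageView and the picture/filter properties are assumed ivars, not names from the question's code:
self.foregroundPicture = [[GPUImagePicture alloc] initWithImage:inputImage];     // image containing the green to replace
self.backgroundPicture = [[GPUImagePicture alloc] initWithImage:backgroundImage];
self.chromaKeyFilter = [[GPUImageChromaKeyBlendFilter alloc] init];
[self.chromaKeyFilter setThresholdSensitivity:0.5];
[self.chromaKeyFilter setColorToReplaceRed:0.0 green:1.0 blue:0.0];
[self.foregroundPicture addTarget:self.chromaKeyFilter]; // first input: the image to be keyed
[self.backgroundPicture addTarget:self.chromaKeyFilter]; // second input: the background
[self.chromaKeyFilter addTarget:self.gpuImageView];      // GPUImageView already in the view hierarchy
[self.foregroundPicture processImage];
[self.backgroundPicture processImage];
// Call processImage again after changing filter settings to refresh the view.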
I am creating a set of GPUImageToneCurveFilters and storing them in an array.
First I create a preview video view for all filters using GPUImageVideoCamera. After selecting any filter, I try to add that filter to the detail view (GPUImageStillCamera), but I get a black screen for this.
If I recreate a new filter instead of reusing it, and then add it to the GPUImageStillCamera, it works fine.
Any solution to this?
Preview view creation code:
-(void)setUpUI{
self.videoView.fillMode = kGPUImageFillModePreserveAspectRatioAndFill;
}
-(void)addFilter:(id)filter
{
// For thumbnails smaller than the input video size, we currently need to make them render at a smaller size.
// This is to avoid wasting processing time on larger frames than will be displayed.
// You'll need to use -forceProcessingAtSize: with a zero size to re-enable full frame processing of video.
self.filter = filter;
[filter forceProcessingAtSize:self.videoView.sizeInPixels];
[[CameraProvider sharedProvider] addTarget:filter];
[filter addTarget:self.videoView];
[[CameraProvider sharedProvider] startCameraCapture];
self.titleLabel.text = [filter fliterName];
}
-(void)stopCamera
{
[self.filter removeAllTargets];
[[CameraProvider sharedProvider] removeTarget:self.filter];
[[CameraProvider sharedProvider] stopCameraCapture];
}
-(IBAction)selectionDone:(id)sender {
[[CameraProvider sharedProvider] removeInputsAndOutputs];
self.selectedFilter(self.filter);
}
// Adding to detail view (GPUImageStillCamera):
- (void)didSelectFilter:(id)newfilter;
{
NSLog(@"fliter");
// newfilter = [[GPUImageToneCurveFilter alloc] initWithACV:@"california-gold-rush.acv"];
[newfilter prepareForImageCapture];
[stillCamera addTarget:newfilter];
[newfilter addTarget:self.imageView];
[stillCamera startCameraCapture];
}
If I recreate a new filter instead of reusing it, and then add it to the GPUImageStillCamera, it works fine.
I hate to state the obvious, but the solution is to recreate the filter when you need it rather than trying to reuse it.
What you want from that array is "a way of getting a filter object given an index". There are many ways of getting that filter object. One of them is to preallocate an array and index into the array. Another is to write a function that, given an index, returns a newly-created object of the same type as you would have retrieved from the array. Instead of having an array of filters, use an array of factories for filters.
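A sketch of that idea; the filter types here are just examples, not the original list:
// Returns a fresh filter for a given index instead of reusing one from an array.
- (GPUImageOutput<GPUImageInput> *)filterForIndex:(NSUInteger)index {
    switch (index) {
        case 0: return [[GPUImageToneCurveFilter alloc] initWithACV:@"california-gold-rush.acv"];
        case 1: return [[GPUImageSepiaFilter alloc] init];
        default: return [[GPUImageFilter alloc] init]; // pass-through
    }
}
The selection code then asks this factory for a new instance each time, rather than pulling a shared instance out of the array.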
So I have a UISlider that is changing the hue using a CIFilter.
It's insanely slow because I'm re-filtering the base image while the UISlider is in use.
Any one have suggestions on how to do this more efficiently?
// UI SLIDER
-(IBAction)changeSlider:(id)sender {
[self doHueAdjustFilterWithBaseImage:currentSticker.image
hueAdjust:[(UISlider *)sender value]];
}
//Change HUE
-(void)doHueAdjustFilterWithBaseImage:(UIImage*)baseImage hueAdjust:(CGFloat)hueAdjust {
CIImage *inputImage = [[CIImage alloc] initWithImage:baseImage];
CIFilter * controlsFilter = [CIFilter filterWithName:@"CIHueAdjust"];
[controlsFilter setValue:inputImage forKey:kCIInputImageKey];
[controlsFilter setValue:[NSNumber numberWithFloat:hueAdjust] forKey:@"inputAngle"];
//NSLog(@"%@", controlsFilter.attributes);
CIImage *displayImage = controlsFilter.outputImage;
CIContext *context = [CIContext contextWithOptions:nil];
if (displayImage == nil){
NSLog(@"Display NADA");
} else {
NSLog(@"RETURN Image");
CGImageRef outputCGImage = [context createCGImage:displayImage fromRect:displayImage.extent];
currentSticker.image = [UIImage imageWithCGImage:outputCGImage];
CGImageRelease(outputCGImage); // createCGImage returns a +1 CGImageRef; release it to avoid leaking on every slider change
}
displayImage = nil;
inputImage = nil;
controlsFilter = nil;
}
You can set the UISlider's continuous property to NO, so that your changeSlider only gets called when the user releases their finger. See Apple's documentation for the continuous property.
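For example (assuming the slider is an outlet named hueSlider):
self.hueSlider.continuous = NO; // changeSlider: now fires only when the finger lifts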
Your problem here is that you keep recreating the context. Make the context a property and initialize it once in your initializer, then use it over and over and you'll see a massive performance gain.
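A sketch of that change (the property name is illustrative):
// In the class extension:
@property (nonatomic, strong) CIContext *context;
// Once, e.g. in init or viewDidLoad:
self.context = [CIContext contextWithOptions:nil];
// In doHueAdjustFilterWithBaseImage:hueAdjust:, use self.context instead of
// calling [CIContext contextWithOptions:nil] on every slider change.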
I'd argue that rather than changing the behaviour you want, the frequent update while sliding is good and best for the user experience. So don't change that behaviour, but rather work on your algorithm to achieve greater optimisation. Check this previously asked question, which has some good tips:
How to implement fast image filters on iOS platform
My approach would be: keep updating the slider property, but only re-filter when the slider crosses a whole step, i.e. detect when the slider passes through 10.0, 20.0, 30.0 and only then update the image, rather than updating for every single point. Hope this makes sense.
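A rough sketch of that idea; lastAppliedHue is an assumed property, and the step value is just an example:
- (IBAction)changeSlider:(UISlider *)sender {
    CGFloat step = 0.5f; // how far the slider must move before re-filtering
    if (fabs(sender.value - self.lastAppliedHue) < step) return;
    self.lastAppliedHue = sender.value;
    [self doHueAdjustFilterWithBaseImage:currentSticker.image hueAdjust:sender.value];
}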
EDIT:
Make your input image and filter variables ivars, check whether they have already been allocated, and reuse them rather than creating them every single time.