I am using this code to generate 5 blurred images using GPUImage, and it seems there is a memory accumulation of about 20 MB that never gets released. Am I doing something wrong?
Here is my code:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    GPUImageFastBlurFilter *blurFilter = [[GPUImageFastBlurFilter alloc] init];
    GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:[image copy]];
    [stillImageSource addTarget:blurFilter];
    CGFloat maxBlur = 12.0;
    for (int i = 0; i < BLUR_STEPS; i++) {
        if (!self.stopBlurOperation) { // stops blur operation on close
            UIImageView *imageView;
            if (i < self.blurredImageViews.count) {
                imageView = (UIImageView *)self.blurredImageViews[i];
                blurFilter.blurRadiusInPixels = maxBlur * (i + 1) / BLUR_STEPS;
                [blurFilter useNextFrameForImageCapture];
                [stillImageSource processImage];
                UIImage *blurredImage = [blurFilter imageFromCurrentFramebuffer];
                dispatch_async(dispatch_get_main_queue(), ^{
                    [imageView setImage:blurredImage];
                });
                blurredImage = nil;
            }
        }
    }
    [blurFilter removeAllTargets];
    [stillImageSource removeAllTargets];
    [GPUImageContext setActiveShaderProgram:nil];
    blurFilter = nil;
    stillImageSource = nil;
});
First, you appear to be using an old version of the framework, as GPUImageFastBlurFilter hasn't existed in it for months. The latest code in the repository uses a new framebuffer caching memory model, which is significantly more efficient in most applications.
Second, that's an extremely inefficient way to run multiple blur passes. Going to and from UIImages requires transferring data to and from the GPU, which is slow, and redrawing through Core Graphics, which is even slower. Again, the framework code from the last several months provides efficient means of generating large-radius blurs without the artifacts you may have seen before, making the above loop unnecessary.
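For reference, a minimal sketch of a single-pass large-radius blur with a recent version of the framework, reusing the question's image variable (GPUImageGaussianBlurFilter and -imageByFilteringImage: are current API):

GPUImageGaussianBlurFilter *gaussianBlur = [[GPUImageGaussianBlurFilter alloc] init];
gaussianBlur.blurRadiusInPixels = 12.0; // large radii no longer require multiple passes
UIImage *blurredImage = [gaussianBlur imageByFilteringImage:image]; // tags and captures the framebuffer for you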
Finally, you're running a tight loop above and generating at least one autoreleased UIImage on each pass. Without an autorelease pool to drain somewhere in there, those will keep building up in memory while the loop runs. However, as I said, you can remove all of this and stop worrying about the memory accumulation if you just update to the latest code in the repository.
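If you do keep a loop like this, a minimal sketch of draining a pool per iteration, reusing the variable names from the question's code:

for (int i = 0; i < BLUR_STEPS; i++) {
    @autoreleasepool {
        blurFilter.blurRadiusInPixels = maxBlur * (i + 1) / BLUR_STEPS;
        [blurFilter useNextFrameForImageCapture];
        [stillImageSource processImage];
        UIImage *blurredImage = [blurFilter imageFromCurrentFramebuffer];
        // ... hand blurredImage off to the main queue as before ...
    } // the pool drains here, releasing each pass's autoreleased UIImage
}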
I am creating a filter with GPUImage. The image displays fine. I also have a UISlider that the user can slide to change the alpha of the filter.
Here is how I set up my filter:
- (void)setupFirstFilter
{
    imageWithOpacity = [ImageWithAlpha imageByApplyingAlpha:0.43f image:scaledImage];
    pictureWithOpacity = [[GPUImagePicture alloc] initWithCGImage:[imageWithOpacity CGImage] smoothlyScaleOutput:YES];
    originalPicture = [[GPUImagePicture alloc] initWithCGImage:[scaledImage CGImage] smoothlyScaleOutput:YES];
    multiplyBlender = [[GPUImageMultiplyBlendFilter alloc] init];
    [originalPicture addTarget:multiplyBlender];
    [pictureWithOpacity addTarget:multiplyBlender];

    UIImage *pinkImage = [ImageFromColor imageFromColor:[UIColor colorWithRed:255.0f/255.0f green:185.0f/255.0f blue:200.0f/255.0f alpha:0.21f]];
    pinkPicture = [[GPUImagePicture alloc] initWithCGImage:[pinkImage CGImage] smoothlyScaleOutput:YES];
    overlayBlender = [[GPUImageOverlayBlendFilter alloc] init];
    [multiplyBlender addTarget:overlayBlender];
    [pinkPicture addTarget:overlayBlender];

    UIImage *blueImage = [ImageFromColor imageFromColor:[UIColor colorWithRed:185.0f/255.0f green:227.0f/255.0f blue:255.0f/255.0f alpha:0.21f]];
    bluePicture = [[GPUImagePicture alloc] initWithCGImage:[blueImage CGImage] smoothlyScaleOutput:YES];
    secondOverlayBlend = [[GPUImageOverlayBlendFilter alloc] init];
    [overlayBlender addTarget:secondOverlayBlend];
    [bluePicture addTarget:secondOverlayBlend];
    [secondOverlayBlend addTarget:self.editImageView];

    [originalPicture processImage];
    [pictureWithOpacity processImage];
    [pinkPicture processImage];
    [bluePicture processImage];
}
And when the slider is changed this gets called:
- (void)sliderChanged:(id)sender
{
    UISlider *slider = (UISlider *)sender;
    double value = slider.value;

    [originalPicture addTarget:multiplyBlender];
    [pictureWithOpacity addTarget:multiplyBlender];

    UIImage *pinkImage = [ImageFromColor imageFromColor:[UIColor colorWithRed:255.0f/255.0f green:185.0f/255.0f blue:200.0f/255.0f alpha:value]];
    pinkPicture = [[GPUImagePicture alloc] initWithCGImage:[pinkImage CGImage] smoothlyScaleOutput:NO];
    [multiplyBlender addTarget:overlayBlender];
    [pinkPicture addTarget:overlayBlender];
    [overlayBlender addTarget:secondOverlayBlend];
    [bluePicture addTarget:secondOverlayBlend];
    [secondOverlayBlend addTarget:self.editImageView];

    [originalPicture processImage];
    [pictureWithOpacity processImage];
    [pinkPicture processImage];
    [bluePicture processImage];
}
The code above works fine, but the slider is slow, and memory climbs to 170 MB of RAM; before applying the filter it is around 30 MB. How can I reduce the RAM this filter uses?
I have already reduced the image size.
Any help is greatly appreciated.
My first suggestion is to get rid of the single-color UIImages and their corresponding GPUImagePicture instances. Instead, use a GPUImageSolidColorGenerator, which does this solid-color generation entirely on the GPU. Make it output a small image size and that will be scaled up to fit your larger image. That will save on the memory required for your UIImages and avoid a costly draw / upload process.
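A minimal sketch of that first suggestion for the pink input, assuming the GPUImageSolidColorGenerator API in the current framework (-forceProcessingAtSize: and -setColorRed:green:blue:alpha:):

GPUImageSolidColorGenerator *pinkGenerator = [[GPUImageSolidColorGenerator alloc] init];
[pinkGenerator forceProcessingAtSize:CGSizeMake(1.0, 1.0)]; // tiny output; the blend scales it up
[pinkGenerator addTarget:overlayBlender];
// set the color after adding targets so the rendered frame propagates to them
[pinkGenerator setColorRed:255.0/255.0 green:185.0/255.0 blue:200.0/255.0 alpha:0.21];

This removes the UIImage allocation, the Core Graphics draw, and the texture upload for each solid color.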
Ultimately, however, I'd recommend making your own custom filter rather than running multiple blend steps using multiple input images. All that you're doing is applying a color modification to your source image, which can be done inside a single custom filter.
You could pass in your colors to a shader that applies two mix() operations, one for each color. The strength of each mix value would correspond to the alpha you're using in the above for each solid color. That would reduce this down to one input image and one processing step, rather than three input images and two steps. It would be faster and use significantly less memory.
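A hypothetical sketch of that single custom filter (the shader string and the tint names are illustrative, not part of GPUImage; each color's alpha component drives its mix() strength; -initWithFragmentShaderFromString: and -setFloatVec4:forUniformName: are GPUImageFilter API):

NSString *const kTwoColorTintFragmentShader = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 uniform sampler2D inputImageTexture;
 uniform lowp vec4 firstColor;  // rgb = tint color, a = mix strength
 uniform lowp vec4 secondColor; // rgb = tint color, a = mix strength

 void main()
 {
     lowp vec4 color = texture2D(inputImageTexture, textureCoordinate);
     color.rgb = mix(color.rgb, firstColor.rgb, firstColor.a);
     color.rgb = mix(color.rgb, secondColor.rgb, secondColor.a);
     gl_FragColor = color;
 }
);

GPUImageFilter *tintFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromString:kTwoColorTintFragmentShader];
[tintFilter setFloatVec4:(GPUVector4){255.0/255.0, 185.0/255.0, 200.0/255.0, 0.21} forUniformName:@"firstColor"];
[tintFilter setFloatVec4:(GPUVector4){185.0/255.0, 227.0/255.0, 255.0/255.0, 0.21} forUniformName:@"secondColor"];

The slider callback then only needs to update one uniform and reprocess a single source picture.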
I have implemented a filter tool mechanism with many filters. Each filter contains 2-3 sub-filters, i.e. I am using GPUImageFilterGroup for this. When I updated the GPUImage library to be iOS 8 compatible, it reported "Instance Method prepareForImageCapture not found" and the app crashed.
I also tried to implement the following code:
GPUImageFilterGroup *filter = [[GPUImageFilterGroup alloc] init];

GPUImageRGBFilter *stillImageFilter1 = [[GPUImageRGBFilter alloc] init];
// [stillImageFilter1 prepareForImageCapture];
stillImageFilter1.red = 0.2;
stillImageFilter1.green = 0.8;
[stillImageFilter1 useNextFrameForImageCapture];
[(GPUImageFilterGroup *)filter addFilter:stillImageFilter1];

GPUImageVignetteFilter *stillImageFilter2 = [[GPUImageVignetteFilter alloc] init];
// [stillImageFilter1 prepareForImageCapture];
stillImageFilter2.vignetteStart = 0.32;
[stillImageFilter1 useNextFrameForImageCapture];
[(GPUImageFilterGroup *)filter addFilter:stillImageFilter2];
[stillImageFilter1 addTarget:stillImageFilter2];

[(GPUImageFilterGroup *)filter setInitialFilters:[NSArray arrayWithObject:stillImageFilter1]];
[(GPUImageFilterGroup *)filter setTerminalFilter:stillImageFilter2];

GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:image];
[stillImageSource addTarget:(GPUImageFilterGroup *)filter];
[stillImageSource processImage];
UIImage *img = [(GPUImageFilterGroup *)filter imageFromCurrentFramebuffer];
It's returning a nil image. Can anyone tell me the correct way?
Thanks in advance.
First, that wasn't a crash caused by iOS 8. You haven't updated your copy of GPUImage in a while, and that method was removed months ago in an update unrelated to any iOS compatibility. The reasons for this are explained here, and I'll once again quote the relevant paragraph:
This does add one slight wrinkle to the interface, though, and I've changed some method names to make this clear to anyone updating their code. Because framebuffers are now transient, if you want to capture an image from one of them, you have to tag it before processing. You do this by using the -useNextFrameForImageCapture method on the filter to indicate that the next time an image is passed down the filter chain, you're going to want to hold on to that framebuffer for a little longer to grab an image out of it. -imageByFilteringImage: automatically does this for you now, and I've added another convenience method in -processImageUpToFilter:withCompletionHandler: to do this in an asynchronous manner.
As you can see, -prepareForImageCapture was removed because it no longer serves any purpose under the new caching system.
The reason why your updated code is returning nil is that you've called -useNextFrameForImageCapture on the wrong filter. It needs to be called on your terminal filter in the group (stillImageFilter2) and only needs to be called once, right before you call -processImage. That signifies that this particular framebuffer needs to hang around long enough to have an image captured from it.
You honestly don't need a GPUImageFilterGroup in the above, as it only complicates your filter chaining.
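A minimal sketch of the corrected capture, dropping the group entirely and tagging the terminal filter, using the filters and image from the question's code:

GPUImageRGBFilter *rgbFilter = [[GPUImageRGBFilter alloc] init];
rgbFilter.red = 0.2;
rgbFilter.green = 0.8;

GPUImageVignetteFilter *vignetteFilter = [[GPUImageVignetteFilter alloc] init];
vignetteFilter.vignetteStart = 0.32;
[rgbFilter addTarget:vignetteFilter];

GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:image];
[stillImageSource addTarget:rgbFilter];

[vignetteFilter useNextFrameForImageCapture]; // tag the terminal filter, right before processing
[stillImageSource processImage];
UIImage *img = [vignetteFilter imageFromCurrentFramebuffer];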
I have implemented a group filter (GPUImageSepiaFilter, GPUImageExposureFilter, GPUImageSaturationFilter) for image editing, and I have a slider used to set a custom exposure (setExposure:) value. In the slider's value-changed action method, I refresh the image preview by calling [picture processImage]. If I move the slider very fast, or scrub it repeatedly, the app reliably crashes due to a memory issue.
- (void)viewDidLoad {
    [super viewDidLoad];
    self.originalPicture = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"IMG_0009.JPG"]];
    self.filterGroup = [[GPUImageFilterGroup alloc] init];

    GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
    [self.filterGroup addFilter:sepiaFilter];

    GPUImageExposureFilter *pixellateFilter = [[GPUImageExposureFilter alloc] init];
    [pixellateFilter setExposure:0.0f];
    [self.filterGroup addFilter:pixellateFilter];

    GPUImageSaturationFilter *saturation = [[GPUImageSaturationFilter alloc] init];
    [self.filterGroup addFilter:saturation];

    [sepiaFilter addTarget:pixellateFilter];
    [pixellateFilter addTarget:saturation];
    [self.filterGroup setInitialFilters:[NSArray arrayWithObjects:sepiaFilter, pixellateFilter, nil]];
    [self.filterGroup setTerminalFilter:saturation];
    [self.originalPicture addTarget:self.filterGroup];

    GPUImageView *filterView = [[GPUImageView alloc] init];
    self.view = filterView;
    [self.filterGroup addTarget:filterView];
    [self.originalPicture processImage];

    [self.slider setMinimumTrackTintColor:[UIColor redColor]];
    [self.slider setMaximumTrackTintColor:[UIColor greenColor]];
    [self.view addSubview:self.slider];
}

- (IBAction)didChangeValue:(id)sender {
    GPUImageExposureFilter *filter = (GPUImageExposureFilter *)[self.filterGroup filterAtIndex:1];
    [filter setExposure:self.slider.value];
    [self.originalPicture processImage];
}
What is the best way to fix this? Or am I doing something wrong?
Thanks,
Srinivas
Here is some general advice about images and GPUImage:
- The size of an image on disk doesn't represent its real size, because the file is usually compressed. The real size, once the image is decompressed in memory and the system has to handle it, is height × width × number of channels × bits per channel.
- Images should always be loaded lazily; it's useless to keep them around if you are not using them.
- Brad made a huge change to his framework around framebuffer reuse that brought a major improvement in how memory is handled. Are you sure you're using the latest version from GitHub?
- Have you tried profiling the app with the Allocations instrument? Maybe the problem is somewhere else; with this tool you can see whether memory grows where you expect it to.
- The imageNamed: method caches images, and even though the documentation says this memory is evicted under memory pressure, I've never had occasion to see that purge actually working.
I'm not seeing anything wrong with how your code uses GPUImage, but I would try smaller images (in pixel size) and, first of all, use Allocations.
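If the imageNamed: cache turns out to matter here, a minimal sketch of bypassing it, assuming the JPEG ships in the app bundle:

// imageWithContentsOfFile: does not go through the imageNamed: cache,
// so the decompressed bitmap can be freed as soon as nothing holds it
NSString *path = [[NSBundle mainBundle] pathForResource:@"IMG_0009" ofType:@"JPG"];
UIImage *uncachedImage = [UIImage imageWithContentsOfFile:path];
self.originalPicture = [[GPUImagePicture alloc] initWithImage:uncachedImage];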
I have a scrollView into which I load images from the net. I sometimes get memory warnings, which I assume are because I am doing something wrong in my image loader.
I am trying to fix the little things, and I just wanted to show the code here and hear whether there is more I can fix to get rid of these warnings.
At any time the scroller (iPad) holds only 4/5 images, from current page - 3 to current page + 3.
This is how I load the images (every image also gets a blur effect using Apple's classes):
(Should I allocate the image views every time? Can I improve anything here?)
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^
{
    NSData *imdata2 = [NSData dataWithContentsOfURL:url];
    dispatch_async(dispatch_get_main_queue(), ^
    {
        UIImage *theImage = [UIImage imageWithData:imdata2 scale:1];
        UIImage *LightImage = [theImage applyLightEffect];
        UIImage *scaledImage = [resizer resizeImageToWidth:[Globals sharedGlobals].imagesWidth WithImage:theImage];
        CGRect viewSizeBack = CGRectMake(scroller.bounds.size.width * toPage, 0, scroller.bounds.size.width, scroller.bounds.size.height);
        int x = [Globals sharedGlobals].pageMargins;
        int y = ([UIScreen mainScreen].bounds.size.height - scaledImage.size.height) / 2;
        CGRect viewSizeFront = CGRectMake(x, y, scaledImage.size.width, scaledImage.size.height);
        UIImageView *backImageView = [[UIImageView alloc] initWithFrame:viewSizeBack];
        UIImageView *frontImageView = [[UIImageView alloc] initWithFrame:viewSizeFront];
        backImageView.layer.cornerRadius = 0.0;
        backImageView.layer.masksToBounds = YES;
        backImageView.image = LightImage;
        frontImageView.layer.cornerRadius = 0.0;
        frontImageView.layer.masksToBounds = YES;
        frontImageView.image = scaledImage;
        frontImageView.layer.borderWidth = 1.0;
        frontImageView.layer.borderColor = [UIColor colorWithRed:255.0 green:255.0 blue:255.0 alpha:1.0].CGColor;
        [backImageView addSubview:frontImageView];
        backImageView.tag = toPage;
        frontImageView.tag = toPage;
        [scroller addSubview:backImageView];
    });
});
You should only ever have 3 images loaded at a maximum: the previous page (if it exists), the current page, and the next page.
Anything loaded beyond that is wasteful, because you can't see those images and they're just taking up memory for no good reason. If the images aren't too big, you can keep them in memory and purge them when you get a warning, but for large images this will still generally cause you issues.
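A minimal sketch of that purge in the owning view controller (currentPage and the lazy-reload path are hypothetical, not from the question's code):

- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // iterate over a copy, since we mutate scroller.subviews as we go
    for (UIView *subview in [scroller.subviews copy]) {
        // drop pages that aren't adjacent to the visible one; reload them lazily on scroll
        if ([subview isKindOfClass:[UIImageView class]] && abs((int)subview.tag - currentPage) > 1) {
            [subview removeFromSuperview];
        }
    }
}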
If you don't use ARC then add this:
[backImageView autorelease];
[frontImageView autorelease];
I was running some CIFilters to blur graphics and it was very laggy, so I wrapped my code in
dispatch_async(dispatch_get_main_queue(), ^{ /*...*/ });
Everything sped up and it ROCKED! Very fast processing, seamless blurring, great!
After about a minute, though, the app crashes at 250 MB of memory (when I don't use dispatch I stay around 50 MB consistently, because ARC manages it all).
I used ARC for my whole project, so I tried manually managing memory by releasing the CIFilters inside my dispatch block, but Xcode keeps returning errors and won't let me manually release since I'm using ARC. At this point it would be an insane hassle to turn off ARC and go through every .m file managing memory by hand.
So how do I specifically manage memory inside dispatch for my CIFilters?
I tried wrapping it all in an @autoreleasepool { /*...*/ } (which ARC strangely allows?), but it did not work.
Example code inside dispatch thread:
UIImage *theImage5 = imageViewImDealingWith.image;
CIContext *context5 = [CIContext contextWithOptions:nil];
CIImage *inputImage5 = [CIImage imageWithCGImage:theImage5.CGImage];

// setting up Gaussian Blur (we could use one of many filters offered by Core Image)
CIFilter *filter5 = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter5 setValue:inputImage5 forKey:kCIInputImageKey];
[filter5 setValue:[NSNumber numberWithFloat:5.00f] forKey:@"inputRadius"];
CIImage *result = [filter5 valueForKey:kCIOutputImageKey];

// CIGaussianBlur has a tendency to shrink the image a little;
// this ensures it matches up exactly to the bounds of our original image
CGImageRef cgImage = [context5 createCGImage:result fromRect:[inputImage5 extent]];
imageViewImDealingWith.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);

context5 = nil;
inputImage5 = nil;
filter5 = nil;
result = nil;
Do you release the cgImage?
CGImageRelease(cgImage);
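For reference, a minimal sketch of the same work with the release and a per-pass pool together, plus one reused CIContext, since creating a context on every pass is expensive (the sharedContext caching is an assumption worth profiling, not from the question's code):

static CIContext *sharedContext = nil;
if (sharedContext == nil) {
    sharedContext = [CIContext contextWithOptions:nil]; // contexts are costly; create once and reuse
}

@autoreleasepool {
    CIImage *inputImage = [CIImage imageWithCGImage:imageViewImDealingWith.image.CGImage];
    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blur setValue:inputImage forKey:kCIInputImageKey];
    [blur setValue:@5.0f forKey:@"inputRadius"];
    CGImageRef cgImage = [sharedContext createCGImage:[blur valueForKey:kCIOutputImageKey]
                                             fromRect:[inputImage extent]];
    imageViewImDealingWith.image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage); // createCGImage: follows the create rule, so this release is required
} // the transient autoreleased Core Image objects drain here, each pass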