iOS - Reduce GPUImage RAM usage

I am creating a filter with GPUImage. The image displays fine. I also have a UISlider that the user can slide to change the alpha of the filter.
Here is how I setup my filter:
-(void)setupFirstFilter
{
    imageWithOpacity = [ImageWithAlpha imageByApplyingAlpha:0.43f image:scaledImage];
    pictureWithOpacity = [[GPUImagePicture alloc] initWithCGImage:[imageWithOpacity CGImage] smoothlyScaleOutput:YES];
    originalPicture = [[GPUImagePicture alloc] initWithCGImage:[scaledImage CGImage] smoothlyScaleOutput:YES];

    multiplyBlender = [[GPUImageMultiplyBlendFilter alloc] init];
    [originalPicture addTarget:multiplyBlender];
    [pictureWithOpacity addTarget:multiplyBlender];

    UIImage *pinkImage = [ImageFromColor imageFromColor:[UIColor colorWithRed:255.0f/255.0f green:185.0f/255.0f blue:200.0f/255.0f alpha:0.21f]];
    pinkPicture = [[GPUImagePicture alloc] initWithCGImage:[pinkImage CGImage] smoothlyScaleOutput:YES];
    overlayBlender = [[GPUImageOverlayBlendFilter alloc] init];
    [multiplyBlender addTarget:overlayBlender];
    [pinkPicture addTarget:overlayBlender];

    UIImage *blueImage = [ImageFromColor imageFromColor:[UIColor colorWithRed:185.0f/255.0f green:227.0f/255.0f blue:255.0f/255.0f alpha:0.21f]];
    bluePicture = [[GPUImagePicture alloc] initWithCGImage:[blueImage CGImage] smoothlyScaleOutput:YES];
    secondOverlayBlend = [[GPUImageOverlayBlendFilter alloc] init];
    [overlayBlender addTarget:secondOverlayBlend];
    [bluePicture addTarget:secondOverlayBlend];

    [secondOverlayBlend addTarget:self.editImageView];

    [originalPicture processImage];
    [pictureWithOpacity processImage];
    [pinkPicture processImage];
    [bluePicture processImage];
}
And when the slider is changed this gets called:
-(void)sliderChanged:(id)sender
{
    UISlider *slider = (UISlider*)sender;
    double value = slider.value;

    [originalPicture addTarget:multiplyBlender];
    [pictureWithOpacity addTarget:multiplyBlender];

    UIImage *pinkImage = [ImageFromColor imageFromColor:[UIColor colorWithRed:255.0f/255.0f green:185.0f/255.0f blue:200.0f/255.0f alpha:value]];
    pinkPicture = [[GPUImagePicture alloc] initWithCGImage:[pinkImage CGImage] smoothlyScaleOutput:NO];

    [multiplyBlender addTarget:overlayBlender];
    [pinkPicture addTarget:overlayBlender];
    [overlayBlender addTarget:secondOverlayBlend];
    [bluePicture addTarget:secondOverlayBlend];
    [secondOverlayBlend addTarget:self.editImageView];

    [originalPicture processImage];
    [pictureWithOpacity processImage];
    [pinkPicture processImage];
    [bluePicture processImage];
}
The code above works fine, but the slider is slow and this takes up to 170 MB of RAM. Before applying the filter, the app sits around 30 MB of RAM. How can I reduce the RAM used by this filter?
I have already reduced the image size.
Any help is greatly appreciated.

My first suggestion is to get rid of the single-color UIImages and their corresponding GPUImagePicture instances. Instead, use a GPUImageSolidColorGenerator, which does this solid-color generation entirely on the GPU. Make it output a small image size and that will be scaled up to fit your larger image. That will save on the memory required for your UIImages and avoid a costly draw / upload process.
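For example, the pink layer could be produced like this (a rough sketch against the existing overlayBlender; the 4x4 output size is arbitrary since the blend scales it up, and depending on your GPUImage version you may need to verify the generator emits a frame before the blend runs):

GPUImageSolidColorGenerator *pinkGenerator = [[GPUImageSolidColorGenerator alloc] init];
[pinkGenerator forceProcessingAtSize:CGSizeMake(4.0, 4.0)]; // tiny output, stretched to match the other blend input
[pinkGenerator addTarget:overlayBlender];
[pinkGenerator setColorRed:255.0f/255.0f green:185.0f/255.0f blue:200.0f/255.0f alpha:0.21f]; // replaces pinkImage / pinkPicture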
Ultimately, however, I'd recommend making your own custom filter rather than running multiple blend steps using multiple input images. All that you're doing is applying a color modification to your source image, which can be done inside a single custom filter.
You could pass in your colors to a shader that applies two mix() operations, one for each color. The strength of each mix value would correspond to the alpha you're using in the above for each solid color. That would reduce this down to one input image and one processing step, rather than three input images and two steps. It would be faster and use significantly less memory.
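A sketch of what that fragment shader could look like (the uniform names are illustrative, not a drop-in class; you would subclass GPUImageFilter, initialize it with initWithFragmentShaderFromString:, and expose the uniforms via setters such as setFloat:forUniformName: so the UISlider can drive pinkStrength directly):

NSString *const kTwoColorTintFragmentShader = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 uniform sampler2D inputImageTexture;

 uniform lowp vec3 pinkColor;
 uniform lowp vec3 blueColor;
 uniform lowp float pinkStrength;   // maps to the slider value
 uniform lowp float blueStrength;   // the constant 0.21 in the original chain

 void main()
 {
     lowp vec4 color = texture2D(inputImageTexture, textureCoordinate);
     color.rgb = mix(color.rgb, pinkColor, pinkStrength);
     color.rgb = mix(color.rgb, blueColor, blueStrength);
     gl_FragColor = color;
 }
);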

Related

GPUImageLookupFilter is not giving desired result for the following LUT image

I have this LUT PNG, and when I apply it to my image it does not give the right result. Is my LUT in a different format, or am I not applying the filter properly?
UIImage *lutimage = [UIImage imageNamed:@"lut.png"];
GPUImagePicture *lookupImageSource = [[GPUImagePicture alloc] initWithImage:lutimage];
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:imported_image];
GPUImageLookupFilter *lookupFilter = [[GPUImageLookupFilter alloc] init];
lookupFilter.intensity = 1.0;
[lookupFilter setInputRotation:kGPUImageRotateRight atIndex:1];
[stillImageSource addTarget:lookupFilter];
[lookupImageSource addTarget:lookupFilter];
[lookupFilter useNextFrameForImageCapture];
[stillImageSource processImage];
[lookupImageSource processImage];
imgview_main.image = [lookupFilter imageFromCurrentFramebuffer];
[[GPUImageContext sharedFramebufferCache] purgeAllUnassignedFramebuffers];
The result is: [result screenshot omitted]
But the result should be like this: [expected screenshot omitted]
A HALD look-up table is a way of translating one input color to one output color.
The position in the HALD image relates to a pixel's color, NOT its position in the image.
What you've done is add a vignette to a color look-up table, so in the bottom-right corner of your LUT, for instance, you've made it darker. What happens when you apply this LUT to an image is that the bright pixels in the image become darker - just as you've told it to. The position in the LUT DOES NOT relate to the image's pixel positions, only to the RGB intensities.
You need some kind of overlay filter (using multiply, perhaps) whose locations in the overlay DO correspond to the locations in the picture you want changed.
TL;DR: You're using color look-up translation code when you should be using code that adds an image overlay/mask.
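A hedged sketch of that approach (the mask image name is hypothetical; swap the multiply blend for whichever blend filter gives the falloff you want):

// Multiply the photo by a vignette-style mask whose dark areas sit where you
// want the photo darkened; positions in the mask DO map to positions in the photo.
GPUImagePicture *photoSource = [[GPUImagePicture alloc] initWithImage:imported_image];
GPUImagePicture *maskSource = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"vignette_mask.png"]];
GPUImageMultiplyBlendFilter *multiplyBlend = [[GPUImageMultiplyBlendFilter alloc] init];
[photoSource addTarget:multiplyBlend];
[maskSource addTarget:multiplyBlend];
[multiplyBlend useNextFrameForImageCapture];
[photoSource processImage];
[maskSource processImage];
imgview_main.image = [multiplyBlend imageFromCurrentFramebuffer];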

Change brightness of an image via UISlider and GPUImage filter

I wrote this code to change the brightness of a UIImage via a UISlider and the GPUImageBrightnessFilter, but every time I test it, the app crashes.
My code:
- (IBAction)sliderBrightness:(id)sender {
    CGFloat midpoint = [(UISlider *)sender value];
    [(GPUImageTiltShiftFilter *)brightnessFilter setTopFocusLevel:midpoint - 0.1];
    [(GPUImageTiltShiftFilter *)brightnessFilter setBottomFocusLevel:midpoint + 0.1];
    [sourcePicture processImage];
}

- (void)brightnessFilter {
    UIImage *inputImage = imgView.image;
    sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
    brightnessFilter = [[GPUImageTiltShiftFilter alloc] init];
    // sepiaFilter = [[GPUImageSobelEdgeDetectionFilter alloc] init];
    GPUImageView *imageView = (GPUImageView *)self.view;
    [brightnessFilter forceProcessingAtSize:imageView.sizeInPixels]; // This is now needed to make the filter run at the smaller output size
    [sourcePicture addTarget:brightnessFilter];
    [brightnessFilter addTarget:imageView];
    [sourcePicture processImage];
}
Let me make an alternative architectural suggestion. Instead of creating a GPUImagePicture and GPUImageBrightnessFilter each time you change the brightness, then saving that out as a UIImage to a UIImageView, it would be far more efficient to reuse the initial picture and filter and render that to a GPUImageView.
Take a look at what I do in the SimpleImageFilter example that comes with GPUImage. For the tilt-shifted image that's displayed to the screen, I create a GPUImagePicture of the source image once, create one instance of the tilt-shift filter, and then send the output to a GPUImageView. This avoids the expensive (both performance and memory-wise) process of going to a UIImage and then displaying that in a UIImageView, and will be much, much faster. While you're at it, you can use -forceProcessingAtSize: on your filter to only render as many pixels as will be displayed in your final view, also speeding things up.
When you have the right settings for filtering your image, and you want the final UIImage out, you can do one last render pass to extract the processed UIImage. You'd set your forced size back to 0 right before doing that, so you now process the full image.
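In outline, that architecture could look like the following (a sketch, not the SimpleImageFilter source; the property names are illustrative and self.view is assumed to be the GPUImageView, as in your code):

- (void)setupBrightnessPipeline
{
    // Created once and reused; sourcePicture / brightnessFilter are assumed to be properties.
    self.sourcePicture = [[GPUImagePicture alloc] initWithImage:self.inputImage];
    self.brightnessFilter = [[GPUImageBrightnessFilter alloc] init];

    GPUImageView *outputView = (GPUImageView *)self.view;
    [self.brightnessFilter forceProcessingAtSize:outputView.sizeInPixels]; // only render as many pixels as are displayed

    [self.sourcePicture addTarget:self.brightnessFilter];
    [self.brightnessFilter addTarget:outputView];
    [self.sourcePicture processImage];
}

- (IBAction)sliderBrightness:(UISlider *)sender
{
    self.brightnessFilter.brightness = sender.value; // GPUImageBrightnessFilter expects roughly -1.0 to 1.0
    [self.sourcePicture processImage];               // re-render with the new value; no new objects are created
}

- (UIImage *)processedImage
{
    [self.brightnessFilter forceProcessingAtSize:CGSizeZero]; // back to full resolution for the final render
    [self.brightnessFilter useNextFrameForImageCapture];
    [self.sourcePicture processImage];
    return [self.brightnessFilter imageFromCurrentFramebuffer];
}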

GPUImage blend filters

I'm trying to apply a blend filters to 2 images.
I've recently updated GPUImage to the last version.
To make things simple I've modified the example SimpleImageFilter.
Here is the code:
UIImage * image1 = [UIImage imageNamed:@"PGSImage_0000.jpg"];
UIImage * image2 = [UIImage imageNamed:@"PGSImage_0001.jpg"];
twoinputFilter = [[GPUImageColorBurnBlendFilter alloc] init];
sourcePicture1 = [[GPUImagePicture alloc] initWithImage:image1 ];
sourcePicture2 = [[GPUImagePicture alloc] initWithImage:image2 ];
[sourcePicture1 addTarget:twoinputFilter];
[sourcePicture1 processImage];
[sourcePicture2 addTarget:twoinputFilter];
[sourcePicture2 processImage];
UIImage * image = [twoinputFilter imageFromCurrentFramebuffer];
The image returned is nil. Setting some breakpoints, I can see that the filter fails inside the method - (CGImageRef)newCGImageFromCurrentlyProcessedOutput; the problem is that framebufferForOutput is nil. I'm using the simulator.
I don't get why it isn't working.
It seems that I was missing this command, as written in the documentation for still image processing:
Note that for a manual capture of an image from a filter, you need to
set -useNextFrameForImageCapture in order to tell the filter that
you'll be needing to capture from it later. By default, GPUImage
reuses framebuffers within filters to conserve memory, so if you need
to hold on to a filter's framebuffer for manual image capture, you
need to let it know ahead of time.
[twoinputFilter useNextFrameForImageCapture];
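So the corrected sequence simply adds that call before processing:

[sourcePicture1 addTarget:twoinputFilter];
[sourcePicture2 addTarget:twoinputFilter];
[twoinputFilter useNextFrameForImageCapture]; // tell the filter to hold on to its framebuffer
[sourcePicture1 processImage];
[sourcePicture2 processImage];
UIImage * image = [twoinputFilter imageFromCurrentFramebuffer];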

GPUImagePicture with a GPUImageBuffer target?

I'm trying to do the following to display an image instead of trying to access video when TARGET_IPHONE_SIMULATOR is true.
UIImage *image = [UIImage imageNamed:@"fake_camera"];
GPUImagePicture *fakeInput = [[GPUImagePicture alloc] initWithImage:image];
GPUImageBuffer *videoBuffer = [[GPUImageBuffer alloc] init];
[fakeInput processImage];
[fakeInput addTarget:videoBuffer];
[videoBuffer addTarget:self.backgroundImageView]; //backgroundImageView is a GPUImageView
This renders my backgroundImageView as solid black, without displaying my image.
If I send the output of fakeInput to backgroundImageView directly, I see the picture rendered normally in backgroundImageView.
What's going on here?
EDIT:
As Brad recommended I tried:
UIImage *image = [UIImage imageNamed:@"fake_camera"];
_fakeInput = [[GPUImagePicture alloc] initWithImage:image];
GPUImagePicture *secondFakeInput = [[GPUImagePicture alloc] initWithImage:image];
[_fakeInput processImage];
[secondFakeInput processImage];
[_fakeInput addTarget:_videoBuffer];
[secondFakeInput addTarget:_videoBuffer];
[_videoBuffer addTarget:_backgroundImageView];
I also tried:
UIImage *image = [UIImage imageNamed:@"fake_camera"];
_fakeInput = [[GPUImagePicture alloc] initWithImage:image];
[_fakeInput processImage];
[_fakeInput processImage];
[_fakeInput addTarget:_videoBuffer];
[_videoBuffer addTarget:_backgroundImageView];
Neither of these two approaches seems to work... should they?
A GPUImageBuffer does as its name suggests: it buffers frames. If you send a still photo to it, that one image is buffered, but not yet sent out. You'd need to send a second image into it (or use -processImage a second time) to get the default one-frame-deep buffer to display your original frame.
GPUImageBuffer really doesn't serve any purpose for still images. It's intended as a frame-delaying operation for video in order to do frame-to-frame comparisons, like a low-pass filter. If you need to do frame comparisons of still images, a blend is a better way to go.
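If you do still want to route the still image through the buffer for the simulator path, a sketch of what the above describes (attach the targets first, then push two frames through):

UIImage *image = [UIImage imageNamed:@"fake_camera"];
_fakeInput = [[GPUImagePicture alloc] initWithImage:image];
[_fakeInput addTarget:_videoBuffer];
[_videoBuffer addTarget:_backgroundImageView];
[_fakeInput processImage]; // first frame fills the one-frame-deep buffer
[_fakeInput processImage]; // second frame pushes the buffered frame out to the view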

Multiple Filters using GPUImage Library

I am trying to apply 3 filters to an image:
one rgbFilter whose values are constant, plus a brightness filter and a saturation filter, both of which should be adjustable, with the image updating as they change.
I have followed the advice here.
I have set up a UIView in IB and set its class to GPUImageView. For some reason the image doesn't show.
My steps are as follows:
self.gpuImagePicture = [[GPUImagePicture alloc] initWithImage:image];
[self.gpuImagePicture addTarget:self.brightnessFilter];
[self.brightnessFilter addTarget:self.contrastFilter];
[self.contrastFilter addTarget:self.imageView];
and then I call this which sets the constant values on the rgb filter
[self setRGBFilterValues]
I setup my filters before this using:
- (void)setupFilters
{
    self.brightnessFilter = [[GPUImageBrightnessFilter alloc] init];
    self.contrastFilter = [[GPUImageContrastFilter alloc] init];
    self.rgbFilter = [[GPUImageRGBFilter alloc] init];
}
Am I missing a step, or why is the image displaying nothing?
You're missing one step. You need to call -processImage on your GPUImagePicture instance to get it to propagate through the filter chain.
You also need to call this anytime you change values within your filter chain and wish to update the final output.
This being my first time using the GPUImage library, it took me way too long to figure out how to simply apply multiple filters to a single image. The link provided by the OP does help explain why the API is relatively complex (one reason: you must specify the order in which the filters are applied).
For future reference, here's my code to apply two filters:
UIImage *initialImage = ...
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:initialImage];
GPUImageSaturationFilter *saturationFilter = [[GPUImageSaturationFilter alloc] init];
saturationFilter.saturation = 0.5;
[stillImageSource addTarget:saturationFilter];
GPUImageGaussianBlurFilter *blurFilter = [[GPUImageGaussianBlurFilter alloc] init];
blurFilter.blurRadiusInPixels = 10;
[saturationFilter addTarget:blurFilter];
GPUImageFilter *lastFilter = blurFilter;
[lastFilter useNextFrameForImageCapture];
[stillImageSource processImage];
UIImage *processedImage = [lastFilter imageFromCurrentFramebuffer];
