Multiple Filters using GPUImage Library - iOS

I am trying to apply 3 filters to an image.
One rgbFilter whose values stay constant, and a brightness filter and a saturation filter, both of which should be adjustable so the image updates as they change.
I have followed the advice here.
I have set up a UIView in IB and set its class to GPUImageView. For some reason the image doesn't show.
My steps are as follows:
self.gpuImagePicture = [[GPUImagePicture alloc] initWithImage:image];
[self.gpuImagePicture addTarget:self.brightnessFilter];
[self.brightnessFilter addTarget:self.contrastFilter];
[self.contrastFilter addTarget:self.imageView];
and then I call this which sets the constant values on the rgb filter
[self setRGBFilterValues];
I setup my filters before this using:
- (void)setupFilters
{
    self.brightnessFilter = [[GPUImageBrightnessFilter alloc] init];
    self.contrastFilter = [[GPUImageContrastFilter alloc] init];
    self.rgbFilter = [[GPUImageRGBFilter alloc] init];
}
Am I missing a step, or why is the image displaying nothing?

You're missing one step. You need to call -processImage on your GPUImagePicture instance to get it to propagate through the filter chain.
You also need to call this anytime you change values within your filter chain and wish to update the final output.
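For example, with the chain from the question, the missing call slots in after the targets are wired up (a sketch; the brightness value at the end is just illustrative):
[self.gpuImagePicture addTarget:self.brightnessFilter];
[self.brightnessFilter addTarget:self.contrastFilter];
[self.contrastFilter addTarget:self.imageView];
[self setRGBFilterValues];

// The missing step: push the source image through the filter chain
[self.gpuImagePicture processImage];

// Later, whenever a filter value changes, process again to refresh the output
self.brightnessFilter.brightness = 0.2; // GPUImageBrightnessFilter expects roughly -1.0 to 1.0
[self.gpuImagePicture processImage];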

For my first time using this GPUImage library, it took me way too long to figure out how to simply apply multiple filters to a single image. The link provided by the OP does help explain why the API is relatively complex (one reason: you must specify the order in which the filters are applied).
For future reference, here's my code to apply two filters:
UIImage *initialImage = ...
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:initialImage];

// First filter in the chain: saturation
GPUImageSaturationFilter *saturationFilter = [[GPUImageSaturationFilter alloc] init];
saturationFilter.saturation = 0.5;
[stillImageSource addTarget:saturationFilter];

// Second filter, chained after the first: Gaussian blur
GPUImageGaussianBlurFilter *blurFilter = [[GPUImageGaussianBlurFilter alloc] init];
blurFilter.blurRadiusInPixels = 10;
[saturationFilter addTarget:blurFilter];

// Tag the last filter so its framebuffer is kept for capture, then process and extract
GPUImageFilter *lastFilter = blurFilter;
[lastFilter useNextFrameForImageCapture];
[stillImageSource processImage];
UIImage *processedImage = [lastFilter imageFromCurrentFramebuffer];

Related

iOS - Reduce GPUImage RAM usage

I am creating a filter with GPUImage. The image displays fine. I also have a UISlider that the user can slide to change the alpha of the filter.
Here is how I setup my filter:
- (void)setupFirstFilter
{
    // Multiply blend of the original image with a 43%-alpha copy of itself
    imageWithOpacity = [ImageWithAlpha imageByApplyingAlpha:0.43f image:scaledImage];
    pictureWithOpacity = [[GPUImagePicture alloc] initWithCGImage:[imageWithOpacity CGImage] smoothlyScaleOutput:YES];
    originalPicture = [[GPUImagePicture alloc] initWithCGImage:[scaledImage CGImage] smoothlyScaleOutput:YES];
    multiplyBlender = [[GPUImageMultiplyBlendFilter alloc] init];
    [originalPicture addTarget:multiplyBlender];
    [pictureWithOpacity addTarget:multiplyBlender];

    // Overlay blend with a solid pink image
    UIImage *pinkImage = [ImageFromColor imageFromColor:[UIColor colorWithRed:255.0f/255.0f green:185.0f/255.0f blue:200.0f/255.0f alpha:0.21f]];
    pinkPicture = [[GPUImagePicture alloc] initWithCGImage:[pinkImage CGImage] smoothlyScaleOutput:YES];
    overlayBlender = [[GPUImageOverlayBlendFilter alloc] init];
    [multiplyBlender addTarget:overlayBlender];
    [pinkPicture addTarget:overlayBlender];

    // Second overlay blend with a solid blue image, then out to the view
    UIImage *blueImage = [ImageFromColor imageFromColor:[UIColor colorWithRed:185.0f/255.0f green:227.0f/255.0f blue:255.0f/255.0f alpha:0.21f]];
    bluePicture = [[GPUImagePicture alloc] initWithCGImage:[blueImage CGImage] smoothlyScaleOutput:YES];
    secondOverlayBlend = [[GPUImageOverlayBlendFilter alloc] init];
    [overlayBlender addTarget:secondOverlayBlend];
    [bluePicture addTarget:secondOverlayBlend];
    [secondOverlayBlend addTarget:self.editImageView];

    [originalPicture processImage];
    [pictureWithOpacity processImage];
    [pinkPicture processImage];
    [bluePicture processImage];
}
And when the slider is changed this gets called:
- (void)sliderChanged:(id)sender
{
    UISlider *slider = (UISlider *)sender;
    double value = slider.value;

    [originalPicture addTarget:multiplyBlender];
    [pictureWithOpacity addTarget:multiplyBlender];

    UIImage *pinkImage = [ImageFromColor imageFromColor:[UIColor colorWithRed:255.0f/255.0f green:185.0f/255.0f blue:200.0f/255.0f alpha:value]];
    pinkPicture = [[GPUImagePicture alloc] initWithCGImage:[pinkImage CGImage] smoothlyScaleOutput:NO];
    [multiplyBlender addTarget:overlayBlender];
    [pinkPicture addTarget:overlayBlender];

    [overlayBlender addTarget:secondOverlayBlend];
    [bluePicture addTarget:secondOverlayBlend];
    [secondOverlayBlend addTarget:self.editImageView];

    [originalPicture processImage];
    [pictureWithOpacity processImage];
    [pinkPicture processImage];
    [bluePicture processImage];
}
The code above works fine, but the slider is slow and this is taking up to 170 MB of RAM. Before pressing to use the filter it is around 30 MB of RAM. How can I reduce the RAM used by this filter?
I have already reduced the image size.
Any help is greatly appreciated.
My first suggestion is to get rid of the single-color UIImages and their corresponding GPUImagePicture instances. Instead, use a GPUImageSolidColorGenerator, which does this solid-color generation entirely on the GPU. Make it output a small image size and that will be scaled up to fit your larger image. That will save on the memory required for your UIImages and avoid a costly draw / upload process.
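For instance, the pink layer above could be replaced with something along these lines (a sketch; the tiny output size and color values just mirror the question's pink image):
GPUImageSolidColorGenerator *pinkGenerator = [[GPUImageSolidColorGenerator alloc] init];
[pinkGenerator forceProcessingAtSize:CGSizeMake(4.0, 4.0)]; // small GPU-generated color, no UIImage upload
[pinkGenerator setColorRed:255.0/255.0 green:185.0/255.0 blue:200.0/255.0 alpha:0.21];
[pinkGenerator addTarget:overlayBlender]; // takes the place of pinkPicture in the chain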
Ultimately, however, I'd recommend making your own custom filter rather than running multiple blend steps using multiple input images. All that you're doing is applying a color modification to your source image, which can be done inside a single custom filter.
You could pass in your colors to a shader that applies two mix() operations, one for each color. The strength of each mix value would correspond to the alpha you're using in the above for each solid color. That would reduce this down to one input image and one processing step, rather than three input images and two steps. It would be faster and use significantly less memory.
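As a rough sketch of that single-filter idea (the shader and uniform names here are illustrative and simplify the blend math rather than reproduce it exactly), a custom fragment shader can be handed to GPUImageFilter's -initWithFragmentShaderFromString::
NSString *const kColorTintFragmentShader = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 uniform sampler2D inputImageTexture;
 uniform lowp float pinkStrength; // plays the role of the pink overlay's alpha
 uniform lowp float blueStrength; // plays the role of the blue overlay's alpha

 void main()
 {
     lowp vec4 color = texture2D(inputImageTexture, textureCoordinate);
     lowp vec3 pink = vec3(255.0/255.0, 185.0/255.0, 200.0/255.0);
     lowp vec3 blue = vec3(185.0/255.0, 227.0/255.0, 255.0/255.0);
     color.rgb = mix(color.rgb, pink, pinkStrength);
     color.rgb = mix(color.rgb, blue, blueStrength);
     gl_FragColor = color;
 }
);

GPUImageFilter *tintFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromString:kColorTintFragmentShader];
[tintFilter setFloat:0.21 forUniformName:@"pinkStrength"];
[tintFilter setFloat:0.21 forUniformName:@"blueStrength"];
The slider callback would then only need to update one uniform via -setFloat:forUniformName: and call -processImage on the source picture again.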

GPUImage crashing in iOS 8

I have implemented a filter tool mechanism which has many filters. Each filter contains 2-3 different filters, i.e. I am using GPUImageFilterGroup for this. When I updated the GPUImage library for iOS 8 compatibility, it shows "Instance method prepareForImageCapture not found" and the app crashes.
I also tried to implement the following code
GPUImageFilterGroup *filter = [[GPUImageFilterGroup alloc] init];
GPUImageRGBFilter *stillImageFilter1 = [[GPUImageRGBFilter alloc] init];
// [stillImageFilter1 prepareForImageCapture];
stillImageFilter1.red = 0.2;
stillImageFilter1.green = 0.8;
[stillImageFilter1 useNextFrameForImageCapture];
[(GPUImageFilterGroup *)filter addFilter:stillImageFilter1];
GPUImageVignetteFilter *stillImageFilter2 = [[GPUImageVignetteFilter alloc] init];
// [stillImageFilter1 prepareForImageCapture];
stillImageFilter2.vignetteStart = 0.32;
[stillImageFilter1 useNextFrameForImageCapture];
[(GPUImageFilterGroup *)filter addFilter:stillImageFilter2];
[stillImageFilter1 addTarget:stillImageFilter2];
[(GPUImageFilterGroup *)filter setInitialFilters:[NSArray arrayWithObject:stillImageFilter1]];
[(GPUImageFilterGroup *)filter setTerminalFilter:stillImageFilter2];
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:image];
[stillImageSource addTarget:(GPUImageFilterGroup *)filter];
[stillImageSource processImage];
UIImage *img = [(GPUImageFilterGroup *)filter imageFromCurrentFramebuffer];
It's returning a nil image. Can anyone tell me what's the correct way?
Thanks in advance.
First, that wasn't a crash for iOS 8. You haven't updated your copy of GPUImage in a while, and that method was removed months ago in an update unrelated to any iOS compatibility. The reasons for this are explained here and I'll once again quote the relevant paragraph:
This does add one slight wrinkle to the interface, though, and I've changed some method names to make this clear to anyone updating their code. Because framebuffers are now transient, if you want to capture an image from one of them, you have to tag it before processing. You do this by using the -useNextFrameForImageCapture method on the filter to indicate that the next time an image is passed down the filter chain, you're going to want to hold on to that framebuffer for a little longer to grab an image out of it. -imageByFilteringImage: automatically does this for you now, and I've added another convenience method in -processImageUpToFilter:withCompletionHandler: to do this in an asynchronous manner.
As you can see, -prepareForImageCapture was removed because it was useless in the new caching system.
The reason why your updated code is returning nil is that you've called -useNextFrameForImageCapture on the wrong filter. It needs to be called on your terminal filter in the group (stillImageFilter2) and only needs to be called once, right before you call -processImage. That signifies that this particular framebuffer needs to hang around long enough to have an image captured from it.
You honestly don't need a GPUImageFilterGroup in the above, as it only complicates your filter chaining.
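For reference, the same processing without the group might look like this (a sketch based on the code above, with -useNextFrameForImageCapture moved to the terminal filter):
GPUImageRGBFilter *rgbFilter = [[GPUImageRGBFilter alloc] init];
rgbFilter.red = 0.2;
rgbFilter.green = 0.8;

GPUImageVignetteFilter *vignetteFilter = [[GPUImageVignetteFilter alloc] init];
vignetteFilter.vignetteStart = 0.32;

GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:image];
[stillImageSource addTarget:rgbFilter];
[rgbFilter addTarget:vignetteFilter];

// Tag the terminal filter once, right before processing, so its framebuffer is retained
[vignetteFilter useNextFrameForImageCapture];
[stillImageSource processImage];
UIImage *img = [vignetteFilter imageFromCurrentFramebuffer];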

How to use ChromaKey and Sepia filter with GPUImage at the same time?

I'm using Brad Larson's GPUImage framework for the first time.
I don't know if it's possible, but I would like to use GPUImageChromaKeyFilter and GPUImageSepiaFilter together. I can use them separately, but combined it doesn't work.
The sepia tone works, but the chroma key doesn't seem to work.
EDIT 2: WORKING
Here is my code:
- (void)setupCameraAndFilters:(AVCaptureDevicePosition)cameraPostion {
    videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:cameraPostion];
    videoCamera.outputImageOrientation = UIInterfaceOrientationLandscapeRight;

    // ChromaKey
    chromaKeyFilter = [[GPUImageChromaKeyBlendFilter alloc] init];
    [(GPUImageChromaKeyBlendFilter *)chromaKeyFilter setColorToReplaceRed:0.0 green:1.0 blue:0.0];
    [videoCamera addTarget:chromaKeyFilter];

    // Input image (replaces the green background)
    UIImage *inputImage = [UIImage imageNamed:@"chromaBackground.jpg"];
    sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
    [sourcePicture processImage];
    [sourcePicture addTarget:chromaKeyFilter];

    // Sepia filter
    sepiaFilter = [[GPUImageSepiaFilter alloc] init];
    [chromaKeyFilter addTarget:sepiaFilter];
    [sepiaFilter addTarget:self.filteredVideoView];

    [videoCamera startCameraCapture];
}
Your problem is that the above code doesn't really make sense. In the first example, you have your still image going into the single-input GPUImageChromaKeyFilter, then you try to target both that source image and your video feed to the single-input GPUImageSepiaFilter. One of those two inputs will be overridden by the other.
GPUImageFilterGroups are merely convenience classes for grouping sequences of filters together in an easy-to-reuse package, and won't solve anything here.
If you're trying to blend video with a chroma-keyed image, you need to use a GPUImageChromaKeyBlendFilter, which takes two inputs and blends them together based on the keying. You can then send that single output image to the sepia tone filter, or however you want to sequence that.
You have to use a GPUImageFilterGroup in order to accomplish what you want. In the GPUImage examples you can find how to achieve this. Good luck!

Change brightness of an image via uislider and gpuimage filter

I wrote this code to change the brightness of a UIImage via a UISlider and GPUImageBrightnessFilter, but every time I test it the app crashes.
My code:
- (IBAction)sliderBrightness:(id)sender {
    CGFloat midpoint = [(UISlider *)sender value];
    [(GPUImageTiltShiftFilter *)brightnessFilter setTopFocusLevel:midpoint - 0.1];
    [(GPUImageTiltShiftFilter *)brightnessFilter setBottomFocusLevel:midpoint + 0.1];
    [sourcePicture processImage];
}

- (void)brightnessFilter {
    UIImage *inputImage = imgView.image;
    sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
    brightnessFilter = [[GPUImageTiltShiftFilter alloc] init];
    // sepiaFilter = [[GPUImageSobelEdgeDetectionFilter alloc] init];
    GPUImageView *imageView = (GPUImageView *)self.view;
    [brightnessFilter forceProcessingAtSize:imageView.sizeInPixels]; // This is now needed to make the filter run at the smaller output size
    [sourcePicture addTarget:brightnessFilter];
    [brightnessFilter addTarget:imageView];
    [sourcePicture processImage];
}
Let me make an alternative architectural suggestion. Instead of creating a GPUImagePicture and GPUImageBrightnessFilter each time you change the brightness, then saving that out as a UIImage to a UIImageView, it would be far more efficient to reuse the initial picture and filter and render that to a GPUImageView.
Take a look at what I do in the SimpleImageFilter example that comes with GPUImage. For the tilt-shifted image that's displayed to the screen, I create a GPUImagePicture of the source image once, create one instance of the tilt-shift filter, and then send the output to a GPUImageView. This avoids the expensive (both performance and memory-wise) process of going to a UIImage and then displaying that in a UIImageView, and will be much, much faster. While you're at it, you can use -forceProcessingAtSize: on your filter to only render as many pixels as will be displayed in your final view, also speeding things up.
When you have the right settings for filtering your image, and you want the final UIImage out, you can do one last render pass to extract the processed UIImage. You'd set your forced size back to 0 right before doing that, so you now process the full image.
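A minimal sketch of that architecture, assuming the on-screen view is a GPUImageView (the property and method names here, such as self.filterView and setupBrightnessPipeline, are illustrative):
// Build the pipeline once, e.g. in viewDidLoad
- (void)setupBrightnessPipeline {
    sourcePicture = [[GPUImagePicture alloc] initWithImage:imgView.image smoothlyScaleOutput:YES];
    brightnessFilter = [[GPUImageBrightnessFilter alloc] init];
    [brightnessFilter forceProcessingAtSize:self.filterView.sizeInPixels]; // only render what will be shown
    [sourcePicture addTarget:brightnessFilter];
    [brightnessFilter addTarget:self.filterView];
    [sourcePicture processImage];
}

// On every slider change, only update the value and re-render
- (IBAction)sliderBrightness:(UISlider *)sender {
    brightnessFilter.brightness = sender.value; // GPUImageBrightnessFilter expects roughly -1.0 to 1.0
    [sourcePicture processImage];
}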

How to correctly alternate between different filters of UIImage (GPUImage)

I am working on a filtering application, similar to the GPUImage "FilterShowcase" project, but I am focusing on filtering static images (selected from the user's library). The main view displays the image selected by the image picker as follows:
sourcePicture = [[GPUImagePicture alloc] initWithImage:sourceImage smoothlyScaleOutput:YES];
filter = [[GPUImageSepiaFilter alloc] init];
filterView = [[GPUImageView alloc]initWithFrame:self.view.frame];
filterView.contentMode = UIViewContentModeScaleAspectFit;
filterView.clipsToBounds = YES;
//Setup targets
[sourcePicture addTarget:filter];
[filter addTarget:filterView];
[sourcePicture processImage];
[self.view addSubview:filterView];
This all works, and the image is filtered in sepia. I allow the user to change filters based on input, aiming for quick alternation between different filters; this is all done in a switch statement...
switch (filterNumber)
{
    case GPUIMAGE_SEPIA:
    {
        self.title = @"Sepia Tone";
        filter = [[GPUImageSepiaFilter alloc] init];
    }; break;
    // a bunch of other filters...
}
[sourcePicture removeAllTargets];
[sourcePicture addTarget:filter];
[filter addTarget:filterView];
[sourcePicture processImage];
This (^) is the current process I am using, but there is a small delay between selecting a filter type and the image actually updating to match that filter.
I previously tried just calling [sourcePicture processImage], but that didn't work (it didn't change the image in the GPUImageView). Am I doing something wrong, or is this the intended performance of the system?
Thanks!
ANSWER
Look at Brad Larson's comment on this question.
As Brad Larson wrote, using -forceProcessingAtSize: on the filter will reduce the execution time of the filters.
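In this setup that amounts to something like the following (a sketch reusing the names from the question):
filter = [[GPUImageSepiaFilter alloc] init];
[filter forceProcessingAtSize:filterView.sizeInPixels]; // only render as many pixels as the view displays
[sourcePicture removeAllTargets];
[sourcePicture addTarget:filter];
[filter addTarget:filterView];
[sourcePicture processImage];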
I have an app that does something similar: I keep my baseImage at hand, and once I am done applying a filter and showing the filtered image I reset everything. The moment a user selects a different filter, I use the baseImage again to create the new filtered image and show it.
So in this example you could, for instance, move the [sourcePicture removeAllTargets]; to right after [self.view addSubview:filterView];.
That saves you work when you want to apply a new filter. Also, there is a quicker (dirtier) way to filter a picture when you are working with an image that already exists. I got this from the documentation that comes with GPUImage and it works like a charm.
GPUImageSketchFilter *stillImageFilter2 = [[GPUImageSketchFilter alloc] init];
UIImage *tmp = [stillImageFilter2 imageByFilteringImage:[self baseImage]];
[self.imageView setImage:tmp];
tmp = nil;
Hope it helps you on your way.
Use this method for switching filters; just pass your filter name string to the method:
- (void)filterImage:(NSString *)aStrFilterOption {
    CIImage *_inputImage = [CIImage imageWithCGImage:[imgOriginal CGImage]];
    CIFilter *filter = [CIFilter filterWithName:aStrFilterOption];
    [filter setValue:_inputImage forKey:kCIInputImageKey];
    CGImageRef moi3 = [[CIContext contextWithOptions:nil] createCGImage:filter.outputImage
                                                                fromRect:_inputImage.extent];
    imgView.image = [UIImage imageWithCGImage:moi3];
    CFRelease(moi3);
}
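For example, assuming imgOriginal and imgView are already set up, it can be called with any built-in Core Image filter name:
[self filterImage:@"CISepiaTone"];       // Core Image's sepia filter
[self filterImage:@"CIPhotoEffectNoir"]; // black-and-white photo effect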
