GPUImage GPUImageChromaKeyBlendFilter With Still Image - ios

I would like to create a GPUImageView to display a filter in real time (as opposed to repeatedly reading imageFromCurrentlyProcessedOutput).
Is it possible to use GPUImage's GPUImageChromaKeyBlendFilter with a still source image, automatically updating a GPUImageView?
Here is my code reading this into a UIImage:
UIImage *inputImage = [UIImage imageNamed:@"1.JPG"];
UIImage *backgroundImage = [UIImage imageNamed:@"2.JPG"];
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageChromaKeyBlendFilter *stillImageFilter = [[GPUImageChromaKeyBlendFilter alloc] init];
[stillImageFilter setThresholdSensitivity:0.5];
[stillImageFilter setColorToReplaceRed:0.0 green:1.0 blue:0.0];
[stillImageSource addTarget:stillImageFilter];
[stillImageSource processImage];
UIImage *currentFilteredVideoFrame = [stillImageFilter imageByFilteringImage:backgroundImage];
Everything I have tried so far requires you to add the 'backgroundImage' as a target to the filter (as you would if you were using the StillCamera). If you add the backgroundImage as a target, GPUImage just uses this new image as its base image.
Can anyone help?
Thanks,

Don't use -imageByFilteringImage: with a two-input filter, like blends. It's a convenience method to quickly set up a small filter chain based on a UIImage and grab a UIImage out. You're not going to want it for something targeting a GPUImageView, anyway.
For the chroma key blend, you'll need to target your input image (the one with the color to be replaced) and your background image to the blend, in that order, using -addTarget, with GPUImagePicture instances for both. You then target your blend to the GPUImageView.
One note: you'll need to maintain strong references to your GPUImagePictures past the setup method if you want to keep updating the filter after that point, so you may need to make them instance variables on your controller class.
Once you've set things up this way, the result will go to your GPUImageView. Every time you call -processImage on one of the two images, the display in your GPUImageView will be updated. Therefore, you can call that after every change in filter settings, such as when a slider updates filter values, and the image will be updated in real time.
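A minimal sketch of that wiring, assuming the pictures and filter are kept as properties on the controller (the property names and the filteredImageView outlet here are hypothetical):
// Held as instance variables so they outlive the setup method
self.stillImageSource = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"1.JPG"]];
self.backgroundImageSource = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"2.JPG"]];
self.blendFilter = [[GPUImageChromaKeyBlendFilter alloc] init];
[self.blendFilter setThresholdSensitivity:0.5];
[self.blendFilter setColorToReplaceRed:0.0 green:1.0 blue:0.0];
// First target supplies the image with the color to be keyed out, second the background
[self.stillImageSource addTarget:self.blendFilter];
[self.backgroundImageSource addTarget:self.blendFilter];
[self.blendFilter addTarget:self.filteredImageView]; // filteredImageView is a GPUImageView
[self.stillImageSource processImage];
[self.backgroundImageSource processImage];
// After any settings change, call -processImage on a picture again to refresh the view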

Related

GPUImage crashing in iOS 8

I have implemented a filter tool mechanism which has many filters. Each filter contains 2-3 different filters, i.e. I am using GPUImageFilterGroup for this. Now that I have updated the GPUImage library to be iOS 8 compatible, it shows "Instance method prepareForImageCapture not found" and the app crashes.
I also tried to implement the following code:
GPUImageFilterGroup *filter = [[GPUImageFilterGroup alloc] init];
GPUImageRGBFilter *stillImageFilter1 = [[GPUImageRGBFilter alloc] init];
// [stillImageFilter1 prepareForImageCapture];
stillImageFilter1.red = 0.2;
stillImageFilter1.green = 0.8;
[stillImageFilter1 useNextFrameForImageCapture];
[(GPUImageFilterGroup *)filter addFilter:stillImageFilter1];
GPUImageVignetteFilter *stillImageFilter2 = [[GPUImageVignetteFilter alloc] init];
// [stillImageFilter1 prepareForImageCapture];
stillImageFilter2.vignetteStart = 0.32;
[stillImageFilter1 useNextFrameForImageCapture];
[(GPUImageFilterGroup *)filter addFilter:stillImageFilter2];
[stillImageFilter1 addTarget:stillImageFilter2];
[(GPUImageFilterGroup *)filter setInitialFilters:[NSArray arrayWithObject:stillImageFilter1]];
[(GPUImageFilterGroup *)filter setTerminalFilter:stillImageFilter2];
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:image];
[stillImageSource addTarget:(GPUImageFilterGroup *)filter];
[stillImageSource processImage];
UIImage *img = [(GPUImageFilterGroup *)filter imageFromCurrentFramebuffer];
It's returning a nil image. Can anyone tell me what's the correct way?
Thanks in advance.
First, that wasn't a crash for iOS 8. You haven't updated your copy of GPUImage in a while, and that method was removed months ago in an update unrelated to any iOS compatibility. The reasons for this are explained here and I'll once again quote the relevant paragraph:
This does add one slight wrinkle to the interface, though, and I've changed some method names to make this clear to anyone updating their code. Because framebuffers are now transient, if you want to capture an image from one of them, you have to tag it before processing. You do this by using the -useNextFrameForImageCapture method on the filter to indicate that the next time an image is passed down the filter chain, you're going to want to hold on to that framebuffer for a little longer to grab an image out of it. -imageByFilteringImage: automatically does this for you now, and I've added another convenience method in -processImageUpToFilter:withCompletionHandler: to do this in an asynchronous manner.
As you can see, -prepareForImageCapture was removed because it was useless in the new caching system.
The reason why your updated code is returning nil is that you've called -useNextFrameForImageCapture on the wrong filter. It needs to be called on your terminal filter in the group (stillImageFilter2) and only needs to be called once, right before you call -processImage. That signifies that this particular framebuffer needs to hang around long enough to have an image captured from it.
You honestly don't need a GPUImageFilterGroup in the above, as it only complicates your filter chaining.
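A sketch of the corrected version without the group, with the tag moved to the terminal filter (filter values copied from the question; this is an illustration of the advice, not a verbatim fix from the answer):
GPUImageRGBFilter *rgbFilter = [[GPUImageRGBFilter alloc] init];
rgbFilter.red = 0.2;
rgbFilter.green = 0.8;
GPUImageVignetteFilter *vignetteFilter = [[GPUImageVignetteFilter alloc] init];
vignetteFilter.vignetteStart = 0.32;
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:image];
[stillImageSource addTarget:rgbFilter];
[rgbFilter addTarget:vignetteFilter];
// Tag the terminal filter once, right before processing
[vignetteFilter useNextFrameForImageCapture];
[stillImageSource processImage];
UIImage *img = [vignetteFilter imageFromCurrentFramebuffer];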

How to use ChromaKey and Sepia filter with GPUImage at the same time?

I'm using for the first time the GPUImage framework of Brad Larson.
I don't know if it's possible, but I would like to use the GPUImageChromaKeyFilter and GPUImageSepiaFilter. I can use them separately, but using them at the same time doesn't work.
The sepia tone works, but the chroma key doesn't seem to work.
EDIT 2: WORKING
Here is my code:
- (void)setupCameraAndFilters:(AVCaptureDevicePosition)cameraPosition {
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:cameraPosition];
videoCamera.outputImageOrientation = UIInterfaceOrientationLandscapeRight;
// ChromaKey
chromaKeyFilter = [[GPUImageChromaKeyBlendFilter alloc] init];
[(GPUImageChromaKeyBlendFilter *)chromaKeyFilter setColorToReplaceRed:0.0 green:1.0 blue:0.0];
[videoCamera addTarget:chromaKeyFilter];
// Input image (replace the green background)
UIImage *inputImage;
inputImage = [UIImage imageNamed:@"chromaBackground.jpg"];
sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
[sourcePicture processImage];
[sourcePicture addTarget:chromaKeyFilter];
// Sepia filter
sepiaFilter = [[GPUImageSepiaFilter alloc] init];
[chromaKeyFilter addTarget:sepiaFilter];
[sepiaFilter addTarget:self.filteredVideoView];
[videoCamera startCameraCapture];
}
Your problem is that the above code doesn't really make sense. In the first example, you have your still image going into the single-input GPUImageChromaKeyFilter, then you try to target both that source image and your video feed to the single-input GPUImageSepiaFilter. One of those two inputs will be overridden by the other.
GPUImageFilterGroups are merely convenience classes for grouping sequences of filters together in an easy-to-reuse package, and won't solve anything here.
If you're trying to blend video with a chroma-keyed image, you need to use a GPUImageChromaKeyBlendFilter, which takes two inputs and blends them together based on the keying. You can then send that single output image to the sepia tone filter, or however you want to sequence that.
You have to use a GPUImageFilterGroup filter in order to accomplish what you want. In the GPUImage examples you can find how to achieve this. Good luck!

Change brightness of an image via uislider and gpuimage filter

I wrote this code to change the brightness of a UIImage via a UISlider and the GPUImageBrightnessFilter, but every time I test it the app crashes.
My code:
- (IBAction)sliderBrightness:(id)sender {
CGFloat midpoint = [(UISlider *)sender value];
[(GPUImageTiltShiftFilter *)brightnessFilter setTopFocusLevel:midpoint - 0.1];
[(GPUImageTiltShiftFilter *)brightnessFilter setBottomFocusLevel:midpoint + 0.1];
[sourcePicture processImage];
}
- (void) brightnessFilter {
UIImage *inputImage = imgView.image;
sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
brightnessFilter = [[GPUImageTiltShiftFilter alloc] init];
// sepiaFilter = [[GPUImageSobelEdgeDetectionFilter alloc] init];
GPUImageView *imageView = (GPUImageView *)self.view;
[brightnessFilter forceProcessingAtSize:imageView.sizeInPixels]; // This is now needed to make the filter run at the smaller output size
[sourcePicture addTarget:brightnessFilter];
[brightnessFilter addTarget:imageView];
[sourcePicture processImage];
}
Let me make an alternative architectural suggestion. Instead of creating a GPUImagePicture and GPUImageBrightnessFilter each time you change the brightness, then saving that out as a UIImage to a UIImageView, it would be far more efficient to reuse the initial picture and filter and render that to a GPUImageView.
Take a look at what I do in the SimpleImageFilter example that comes with GPUImage. For the tilt-shifted image that's displayed to the screen, I create a GPUImagePicture of the source image once, create one instance of the tilt-shift filter, and then send the output to a GPUImageView. This avoids the expensive (both performance and memory-wise) process of going to a UIImage and then displaying that in a UIImageView, and will be much, much faster. While you're at it, you can use -forceProcessingAtSize: on your filter to only render as many pixels as will be displayed in your final view, also speeding things up.
When you have the right settings for filtering your image, and you want the final UIImage out, you can do one last render pass to extract the processed UIImage. You'd set your forced size back to 0 right before doing that, so you now process the full image.
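A sketch of that flow, assuming sourcePicture and brightnessFilter are instance variables and using a GPUImageBrightnessFilter (whose brightness property runs from -1.0 to 1.0, with 0.0 leaving the image unchanged):
// One-time setup, e.g. in viewDidLoad
sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
brightnessFilter = [[GPUImageBrightnessFilter alloc] init];
GPUImageView *imageView = (GPUImageView *)self.view;
[brightnessFilter forceProcessingAtSize:imageView.sizeInPixels]; // render only displayed pixels
[sourcePicture addTarget:brightnessFilter];
[brightnessFilter addTarget:imageView];
[sourcePicture processImage];

// Called on every slider change; re-renders straight to the GPUImageView
- (IBAction)sliderBrightness:(id)sender {
    brightnessFilter.brightness = [(UISlider *)sender value];
    [sourcePicture processImage];
}

// One final full-size pass when you want the UIImage out
[brightnessFilter forceProcessingAtSize:CGSizeZero];
[brightnessFilter useNextFrameForImageCapture];
[sourcePicture processImage];
UIImage *finalImage = [brightnessFilter imageFromCurrentFramebuffer];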

GPUImage blend filters

I'm trying to apply a blend filter to 2 images.
I've recently updated GPUImage to the latest version.
To make things simple, I've modified the SimpleImageFilter example.
Here is the code:
UIImage * image1 = [UIImage imageNamed:@"PGSImage_0000.jpg"];
UIImage * image2 = [UIImage imageNamed:@"PGSImage_0001.jpg"];
twoinputFilter = [[GPUImageColorBurnBlendFilter alloc] init];
sourcePicture1 = [[GPUImagePicture alloc] initWithImage:image1 ];
sourcePicture2 = [[GPUImagePicture alloc] initWithImage:image2 ];
[sourcePicture1 addTarget:twoinputFilter];
[sourcePicture1 processImage];
[sourcePicture2 addTarget:twoinputFilter];
[sourcePicture2 processImage];
UIImage * image = [twoinputFilter imageFromCurrentFramebuffer];
The image returned is nil. Setting some breakpoints, I can see that the filter fails inside the method -(CGImageRef)newCGImageFromCurrentlyProcessedOutput; the problem is that framebufferForOutput is nil. I'm using the simulator.
I don't get why it isn't working.
It seems that I was missing this command, as written in the documentation for still image processing:
Note that for a manual capture of an image from a filter, you need to set -useNextFrameForImageCapture in order to tell the filter that you'll be needing to capture from it later. By default, GPUImage reuses framebuffers within filters to conserve memory, so if you need to hold on to a filter's framebuffer for manual image capture, you need to let it know ahead of time.
[twoinputFilter useNextFrameForImageCapture];
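In the snippet above, the tag goes in right before the inputs are processed; a sketch using the question's variable names:
[sourcePicture1 addTarget:twoinputFilter];
[sourcePicture2 addTarget:twoinputFilter];
// Tell the blend filter to hold on to its next framebuffer for capture
[twoinputFilter useNextFrameForImageCapture];
[sourcePicture1 processImage];
[sourcePicture2 processImage];
UIImage * image = [twoinputFilter imageFromCurrentFramebuffer];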

GPUImagePicture with a GPUImageBuffer target?

I'm trying to do the following to display an image instead of trying to access video when TARGET_IPHONE_SIMULATOR is true.
UIImage *image = [UIImage imageNamed:@"fake_camera"];
GPUImagePicture *fakeInput = [[GPUImagePicture alloc] initWithImage:image];
GPUImageBuffer *videoBuffer = [[GPUImageBuffer alloc] init];
[fakeInput processImage];
[fakeInput addTarget:videoBuffer];
[videoBuffer addTarget:self.backgroundImageView]; //backgroundImageView is a GPUImageView
This renders my backgroundImageView in black, without displaying my image.
If I send the output of fakeInput to backgroundImageView directly, I see the picture rendered normally in backgroundImageView.
What's going on here?
EDIT:
As Brad recommended I tried:
UIImage *image = [UIImage imageNamed:@"fake_camera"];
_fakeInput = [[GPUImagePicture alloc] initWithImage:image];
GPUImagePicture *secondFakeInput = [[GPUImagePicture alloc] initWithImage:image];
[_fakeInput processImage];
[secondFakeInput processImage];
[_fakeInput addTarget:_videoBuffer];
[secondFakeInput addTarget:_videoBuffer];
[_videoBuffer addTarget:_backgroundImageView];
I also tried:
UIImage *image = [UIImage imageNamed:@"fake_camera"];
_fakeInput = [[GPUImagePicture alloc] initWithImage:image];
[_fakeInput processImage];
[_fakeInput processImage];
[_fakeInput addTarget:_videoBuffer];
[_videoBuffer addTarget:_backgroundImageView];
Neither of these two approaches seems to work... should they?
A GPUImageBuffer does as its name suggests: it buffers frames. If you send a still photo into it, that one image is buffered, but not yet sent out. You'd need to send a second image into it (or use -processImage a second time) for the default one-frame-deep buffer to push out and display your original frame.
GPUImageBuffer really doesn't serve any purpose for still images. It's intended as a frame-delaying operation for video in order to do frame-to-frame comparisons, like a low-pass filter. If you need to do frame comparisons of still images, a blend is a better way to go.
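Given that, the simplest simulator fallback is to drop the buffer entirely and target the picture at the view directly, which the question already notes renders correctly:
UIImage *image = [UIImage imageNamed:@"fake_camera"];
_fakeInput = [[GPUImagePicture alloc] initWithImage:image];
[_fakeInput addTarget:_backgroundImageView]; // _backgroundImageView is a GPUImageView
[_fakeInput processImage];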
