GPUImageAlphaBlendFilter realtime processing from GPUImageStillCamera source - ios

I am using the GPUImage library and I'm trying to blend two images in realtime, and display them on a GPUImageView. I am trying to alpha-blend plain camera input, with a filtered version of it. Here is what I'm trying to do:
          +-------------------------+
camera ---+                         +---> alpha blend ---> image view
          +---> color filter -------+
I've found some posts about using the blend filters, but they don't seem to cover realtime processing. I've found https://github.com/BradLarson/GPUImage/issues/319, GPUImage: blending two images, and https://github.com/BradLarson/GPUImage/issues/751, but they either aren't about realtime processing (the first and second) or don't work (the third).
I've tried almost everything, but all I'm getting is a white image in the GPUImageView. If I don't use the alpha blend filter, say, just use a false color filter or something similar, it works perfectly. Here is my code:
blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
blendFilter.mix = 0.5;
[blendFilter prepareForImageCapture];
[blendFilter addTarget:imageView];
passThrough = [[GPUImageFilter alloc] init];
[passThrough prepareForImageCapture];
[passThrough addTarget:blendFilter];
selectedFilter = [[GPUImageFalseColorFilter alloc] init];
[selectedFilter prepareForImageCapture];
[selectedFilter addTarget:blendFilter];
stillCamera = [[GPUImageStillCamera alloc] init];
stillCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
[stillCamera addTarget:passThrough];
[stillCamera addTarget:selectedFilter];
[stillCamera startCameraCapture];
All I'm getting is a white, blank screen. If I change [selectedFilter addTarget:blendFilter]; to [selectedFilter addTarget:imageView]; then the false color filter's output is displayed in the image view.
There seems to be something wrong with the alpha blend filter. I've read in some posts that I need to call processImage on the inputs, but those posts all deal with non-realtime inputs as far as I understand. How can I get GPUImageAlphaBlendFilter to work in realtime?

Ok, after investigating the issue further on the internet and on the project's issue list (https://github.com/BradLarson/GPUImage/issues), I found a workaround. When setting the blend filter as the target, I needed to specify the texture index explicitly. For some reason (probably a bug), adding the blend filter as a target twice doesn't attach the second input at the next texture index. Setting the texture indices explicitly to 0 and 1 did work:
[passThrough addTarget:blendFilter atTextureLocation:0];
[selectedFilter addTarget:blendFilter atTextureLocation:1];
For filters that are targets of a single source, addTarget: is enough, though, such as [stillCamera addTarget:selectedFilter];.
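For reference, here is a sketch of the full pipeline from the question with that fix applied (the prepareForImageCapture calls from the question are omitted here):
stillCamera = [[GPUImageStillCamera alloc] init];
stillCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

passThrough = [[GPUImageFilter alloc] init];
selectedFilter = [[GPUImageFalseColorFilter alloc] init];

blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
blendFilter.mix = 0.5;

// Single-input filters: plain addTarget: is fine.
[stillCamera addTarget:passThrough];
[stillCamera addTarget:selectedFilter];

// Two-input blend filter: pin each input to an explicit texture index.
[passThrough addTarget:blendFilter atTextureLocation:0];
[selectedFilter addTarget:blendFilter atTextureLocation:1];

[blendFilter addTarget:imageView];
[stillCamera startCameraCapture];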

Related

iOS Determine the corners of a Business Card in realtime

I want to implement a business card detecting functionality like this app (https://scanbot.io).
The camera should detect a business card and automatically take a picture of it (only the business card).
My idea was to use Brad Larson's GPUImage library: detect the corners (using the Harris corner detection algorithm), calculate the biggest rectangle from the corners obtained, and crop the image contained inside that rectangle.
Here is my code:
- (void)setupFilter {
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
filter = [[GPUImageHarrisCornerDetectionFilter alloc] init];
[(GPUImageHarrisCornerDetectionFilter *)filter setThreshold:0.01f];
[(GPUImageHarrisCornerDetectionFilter *)filter setSensitivity:0.5f];
[(GPUImageHarrisCornerDetectionFilter *)filter setBlurRadiusInPixels:2.0f];
[videoCamera addTarget:filter];
videoCamera.runBenchmark = YES;
GPUImageView *filterview = [[GPUImageView alloc] init];
self.view=filterview;
GPUImageCrosshairGenerator *crosshairGenerator = [[GPUImageCrosshairGenerator alloc] init];
crosshairGenerator.crosshairWidth = 22.0;
[crosshairGenerator forceProcessingAtSize:CGSizeMake(480.0, 640.0)];
[(GPUImageHarrisCornerDetectionFilter *)filter setCornersDetectedBlock:^(GLfloat* cornerArray, NSUInteger cornersDetected, CMTime frameTime) {
[crosshairGenerator renderCrosshairsFromArray:cornerArray count:cornersDetected frameTime:frameTime];
}];
GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
[blendFilter forceProcessingAtSize:CGSizeMake(480.0, 640.0)];
GPUImageGammaFilter *gammaFilter = [[GPUImageGammaFilter alloc] init];
[videoCamera addTarget:gammaFilter];
[gammaFilter addTarget:blendFilter];
[crosshairGenerator addTarget:blendFilter];
[blendFilter addTarget:filterview];
[videoCamera startCameraCapture];
}
The problem is I don't know how to properly adjust the threshold and sensitivity attributes to get only the corners of the card (right now I'm getting corners for all the objects in the image).
I also don't know how to work with this GLfloat* cornerArray.
I don't know if I am on the right track... any other ideas about how to implement this functionality, or is there any existing library?
Thanks!
Read about the Hough transform. With it, you can detect lines. I would urge you to detect straight lines, then find four lines that are approximately at right angles to each other and take the rectangle with the biggest area.
The steps would be:
1. Edge detection using a Sobel filter.
2. Hough transform to find all straight lines in the image.
3. Look at all parallel lines, and then at all lines at 90 degrees to those parallel line pairs, to find possible rectangles.
4. Pick the rectangle you like best. This could be by area, or by being best aligned to the phone, or you could require that all edges are inside the visible camera image, or some other method.
Lastly: Computer Vision is hard... don't expect easy results.
Addendum
I should note that step 3 above is very simple, because the angle a line takes is simply one dimension of your Hough space. So parallel lines will have equal values in this dimension, and orthogonal lines will be shifted by pi/2 (90 degrees).
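As a minimal illustration of that angle check (plain helper functions, not tied to GPUImage; the 2-degree tolerance is an arbitrary choice):
#import <Foundation/Foundation.h>
#include <math.h>

// Line angles from the Hough transform are equivalent modulo pi.
static double angleDistance(double a, double b) {
    double diff = fmod(fabs(a - b), M_PI);
    return fmin(diff, M_PI - diff);
}

// Parallel lines share (roughly) the same angle...
static BOOL linesAreParallel(double thetaA, double thetaB) {
    return angleDistance(thetaA, thetaB) < 2.0 * M_PI / 180.0;
}

// ...and orthogonal lines are shifted by pi/2.
static BOOL linesAreOrthogonal(double thetaA, double thetaB) {
    return angleDistance(thetaA, thetaB + M_PI_2) < 2.0 * M_PI / 180.0;
}
A rectangle candidate is then two parallel pairs that are orthogonal to each other; intersect the four lines to get the corner points and keep the candidate with the largest area.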

GPUImage crashing in iOS 8

I have implemented a filter tool mechanism which has many filters. Each filter contains 2-3 different filters, i.e. I am using GPUImageFilterGroup for this. Now that I have updated the GPUImage library to be iOS 8 compatible, it shows "Instance method prepareForImageCapture not found" and the app crashes.
I also tried to implement the following code:
GPUImageFilterGroup *filter = [[GPUImageFilterGroup alloc] init];
GPUImageRGBFilter *stillImageFilter1 = [[GPUImageRGBFilter alloc] init];
// [stillImageFilter1 prepareForImageCapture];
stillImageFilter1.red = 0.2;
stillImageFilter1.green = 0.8;
[stillImageFilter1 useNextFrameForImageCapture];
[(GPUImageFilterGroup *)filter addFilter:stillImageFilter1];
GPUImageVignetteFilter *stillImageFilter2 = [[GPUImageVignetteFilter alloc] init];
// [stillImageFilter1 prepareForImageCapture];
stillImageFilter2.vignetteStart = 0.32;
[stillImageFilter1 useNextFrameForImageCapture];
[(GPUImageFilterGroup *)filter addFilter:stillImageFilter2];
[stillImageFilter1 addTarget:stillImageFilter2];
[(GPUImageFilterGroup *)filter setInitialFilters:[NSArray arrayWithObject:stillImageFilter1]];
[(GPUImageFilterGroup *)filter setTerminalFilter:stillImageFilter2];
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:image];
[stillImageSource addTarget:(GPUImageFilterGroup *)filter];
[stillImageSource processImage];
UIImage *img = [(GPUImageFilterGroup *)filter imageFromCurrentFramebuffer];
It's returning a nil image. Can anyone tell me what the correct way is?
Thanks in advance.
First, that wasn't a crash for iOS 8. You haven't updated your copy of GPUImage in a while, and that method was removed months ago in an update unrelated to any iOS compatibility. The reasons for this are explained here and I'll once again quote the relevant paragraph:
This does add one slight wrinkle to the interface, though, and I've changed some method names to make this clear to anyone updating their code. Because framebuffers are now transient, if you want to capture an image from one of them, you have to tag it before processing. You do this by using the -useNextFrameForImageCapture method on the filter to indicate that the next time an image is passed down the filter chain, you're going to want to hold on to that framebuffer for a little longer to grab an image out of it. -imageByFilteringImage: automatically does this for you now, and I've added another convenience method in -processImageUpToFilter:withCompletionHandler: to do this in an asynchronous manner.
As you can see, -prepareForImageCapture was removed because it was useless in the new caching system.
The reason why your updated code is returning nil is that you've called -useNextFrameForImageCapture on the wrong filter. It needs to be called on your terminal filter in the group (stillImageFilter2) and only needs to be called once, right before you call -processImage. That signifies that this particular framebuffer needs to hang around long enough to have an image captured from it.
You honestly don't need a GPUImageFilterGroup in the above, as it only complicates your filter chaining.
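A minimal sketch of that simplified chain, following the advice above (same filters as the question, with the image-capture tag on the terminal vignette filter right before processing):
GPUImageRGBFilter *rgbFilter = [[GPUImageRGBFilter alloc] init];
rgbFilter.red = 0.2;
rgbFilter.green = 0.8;

GPUImageVignetteFilter *vignetteFilter = [[GPUImageVignetteFilter alloc] init];
vignetteFilter.vignetteStart = 0.32;

GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:image];
[stillImageSource addTarget:rgbFilter];
[rgbFilter addTarget:vignetteFilter];

// Tag the terminal filter so its framebuffer is kept long enough to capture from.
[vignetteFilter useNextFrameForImageCapture];
[stillImageSource processImage];

UIImage *img = [vignetteFilter imageFromCurrentFramebuffer];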

How to use ChromaKey and Sepia filter with GPUImage at the same time?

I'm using Brad Larson's GPUImage framework for the first time.
I don't know if it's possible, but I would like to use the GPUImageChromaKeyFilter and GPUImageSepiaFilter together. I can use them separately, but at the same time it doesn't work.
The sepia tone works, but the chroma key doesn't seem to work.
EDIT 2: WORKING
Here is my code:
- (void)setupCameraAndFilters:(AVCaptureDevicePosition)cameraPostion {
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:cameraPostion];
videoCamera.outputImageOrientation = UIInterfaceOrientationLandscapeRight;
// ChromaKey
chromaKeyFilter = [[GPUImageChromaKeyBlendFilter alloc] init];
[(GPUImageChromaKeyBlendFilter *)chromaKeyFilter setColorToReplaceRed:0.0 green:1.0 blue:0.0];
[videoCamera addTarget:chromaKeyFilter];
// Input image (replace the green background)
UIImage *inputImage;
inputImage = [UIImage imageNamed:@"chromaBackground.jpg"];
sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
[sourcePicture processImage];
[sourcePicture addTarget:chromaKeyFilter];
// Sepia filter
sepiaFilter = [[GPUImageSepiaFilter alloc] init];
[chromaKeyFilter addTarget:sepiaFilter];
[sepiaFilter addTarget:self.filteredVideoView];
[videoCamera startCameraCapture];
}
Your problem is that the above code doesn't really make sense. In the first example, you have your still image going into the single-input GPUImageChromaKeyFilter, then you try to target both that source image and your video feed to the single-input GPUImageSepiaFilter. One of those two inputs will be overridden by the other.
GPUImageFilterGroups are merely convenience classes for grouping sequences of filters together in an easy-to-reuse package, and won't solve anything here.
If you're trying to blend video with a chroma-keyed image, you need to use a GPUImageChromaKeyBlendFilter, which takes two inputs and blends them together based on the keying. You can then send that single output image to the sepia tone filter, or however you want to sequence that.
You have to use a GPUImageFilterGroup in order to accomplish what you want. In the GPUImage examples you can find how to achieve this. Good luck!

GPUImage Harris Corner Detection on an existing UIImage gives a black screen output

I've successfully added a crosshair generator and harris corner detection filter onto a GPUImageStillCamera output, as well as on live video from GPUImageVideoCamera.
I'm now trying to get this working on a photo set on a UIImageView, but continually get a black screen as the output. I have been reading the issues listed on GitHub against Brad Larson's GPUImage project, but they seemed to be more in relation to blend type filters, and following the suggestions there I still face the same problem.
I've tried altering every line of code to follow various examples I have seen, and to follow Brad's example code in the Filter demo projects, but the result is always the same.
My current code is, once I've taken a photo (which I check to make sure it is not just a black photo at this point):
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:self.photoView.image];
GPUImageHarrisCornerDetectionFilter *cornerFilter1 = [[GPUImageHarrisCornerDetectionFilter alloc] init];
[cornerFilter1 setThreshold:0.1f];
[cornerFilter1 forceProcessingAtSize:self.photoView.frame.size];
GPUImageCrosshairGenerator *crossGen = [[GPUImageCrosshairGenerator alloc] init];
crossGen.crosshairWidth = 15.0;
[crossGen forceProcessingAtSize:self.photoView.frame.size];
[cornerFilter1 setCornersDetectedBlock:^(GLfloat* cornerArray, NSUInteger cornersDetected, CMTime frameTime, BOOL endUpdating)
{
[crossGen renderCrosshairsFromArray:cornerArray count:cornersDetected frameTime:frameTime];
}];
[stillImageSource addTarget:crossGen];
[crossGen addTarget:cornerFilter1];
[crossGen prepareForImageCapture];
[stillImageSource processImage];
UIImage *currentFilteredImage = [crossGen imageFromCurrentlyProcessedOutput];
UIImageWriteToSavedPhotosAlbum(currentFilteredImage, nil, nil, nil);
[self.photoView setImage:currentFilteredImage];
I've tried prepareForImageCapture on both filters, on neither, adding the two targets in the opposite order, calling imageFromCurrentlyProcessedOutput on either filter, I've tried it without the crosshair generator, I've tried using local variables and variables declared in the .h file. I've tried with and without forceProcessingAtSize on each of the filters.
I can't think of anything else that I haven't tried to get the output. The app is running on iOS 7.0, built with Xcode 5.0.1. The standard filters work on the photo, e.g. the simple GPUImageSobelEdgeDetectionFilter included in the SimpleImageFilter test app.
Any suggestions? I am saving the output to the camera roll so I can check it's not just me failing to display it correctly. I suspect it's a stupid mistake somewhere but am at a loss as to what else to try now.
Thanks.
Edited to add: the corner detection is definitely working, as depending on the threshold I set, it returns between 6 and 511 corners.
The problem with the above is that you're not chaining filters in the proper order. The Harris corner detector takes in an input image, finds the corners within it, and provides the callback block to return those corners. The GPUImageCrosshairGenerator takes in those points and creates a visual representation of the corners.
What you have in the above code is image -> GPUImageCrosshairGenerator -> GPUImageHarrisCornerDetectionFilter, which won't really do anything.
The code in your answer does go directly from the image to the GPUImageHarrisCornerDetectionFilter, but you don't want to use the image output from that. As you saw, it produces an image where the corners are identified by white dots on a black background. Instead, use the callback block, which processes that and returns an array of normalized corner coordinates for you to use.
If you need these to be visible, you could then take that array of coordinates and feed it into the GPUImageCrosshairGenerator to create visible crosshairs, but that image will need to be blended with your original image to make any sense. This is what I do in the FilterShowcase example.
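As a rough sketch of collecting those normalized coordinates from the callback (this uses the three-argument block signature from the earlier question on this page; the packing of the array as x/y pairs and the conversion to image coordinates are assumptions for illustration):
NSMutableArray *detectedCorners = [NSMutableArray array];
CGSize imageSize = self.photoView.image.size;

[cornerFilter1 setCornersDetectedBlock:^(GLfloat *cornerArray, NSUInteger cornersDetected, CMTime frameTime) {
    [detectedCorners removeAllObjects];
    for (NSUInteger i = 0; i < cornersDetected; i++) {
        // Assumed layout: two floats per corner, normalized to 0..1.
        CGPoint normalized = CGPointMake(cornerArray[i * 2], cornerArray[i * 2 + 1]);
        CGPoint inImage = CGPointMake(normalized.x * imageSize.width,
                                      normalized.y * imageSize.height);
        [detectedCorners addObject:[NSValue valueWithCGPoint:inImage]];
    }
}];

[stillImageSource addTarget:cornerFilter1];
[stillImageSource processImage];
// After processing, detectedCorners holds the corner points (processing may run
// on GPUImage's queue, so synchronize accordingly).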
I appear to have fixed the problem by trying different variations again; now the returned image is black, but there are white dots at the locations of the found corners. I removed the GPUImageCrosshairGenerator altogether. The code that got this working was:
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:self.photoView.image];
GPUImageHarrisCornerDetectionFilter *cornerFilter1 = [[GPUImageHarrisCornerDetectionFilter alloc] init];
[cornerFilter1 setThreshold:0.1f];
[cornerFilter1 forceProcessingAtSize:self.photoView.frame.size];
[stillImageSource addTarget:cornerFilter1];
[cornerFilter1 prepareForImageCapture];
[stillImageSource processImage];
UIImage *currentFilteredImage = [cornerFilter1 imageFromCurrentlyProcessedOutput];
UIImageWriteToSavedPhotosAlbum(currentFilteredImage, nil, nil, nil);
[self.photoView setImage:currentFilteredImage];
I do not need to add the crosshairs for the purpose of my app - I simply want to parse the locations of the corners to provide some cropping, but I required the dots to be visible to check the corners were being detected correctly. I'm not sure if the white dots on black are the expected outcome of this filter, but I presume so.
Updated code for Swift 2:
let stillImageSource: GPUImagePicture = GPUImagePicture(image: image)
let cornerFilter1: GPUImageHarrisCornerDetectionFilter = GPUImageHarrisCornerDetectionFilter()
cornerFilter1.threshold = 0.1
cornerFilter1.forceProcessingAtSize(image.size)
stillImageSource.addTarget(cornerFilter1)
stillImageSource.processImage()
let tmp: UIImage = cornerFilter1.imageByFilteringImage(image)
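For the cropping mentioned above, a rough sketch (in Objective-C, for consistency with the rest of this page) of one possible approach once the corner points have been converted to image pixel coordinates; the bounding-box idea and the CoreGraphics crop are illustrative, not part of the original answer:
// Crop an image to the bounding box of the detected corner points.
- (UIImage *)cropImage:(UIImage *)image toCorners:(NSArray *)corners {
    if (corners.count == 0) {
        return image;
    }
    CGFloat minX = CGFLOAT_MAX, minY = CGFLOAT_MAX, maxX = 0.0, maxY = 0.0;
    for (NSValue *value in corners) {
        CGPoint p = [value CGPointValue];
        minX = MIN(minX, p.x);
        minY = MIN(minY, p.y);
        maxX = MAX(maxX, p.x);
        maxY = MAX(maxY, p.y);
    }
    CGRect cropRect = CGRectMake(minX, minY, maxX - minX, maxY - minY);
    CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
    CGImageRelease(croppedRef);
    return cropped;
}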

Chroma-Filtering video using GPUImage?

I am attempting to display a video file with transparency inside my application using a transparency key (RGB: 0x00FF00, or full green) using Brad Larson's awesome GPUImage toolkit. However, I am experiencing some difficulties with the GPUImageChromaKeyFilter filter, and I don't quite understand why.
My source video file is available in my Dropbox here (12 KB, 3 seconds long, full green background, just a square on the screen), and I used the sample project titled SimpleVideoFilter.
This is the code I attempted to use (I simply replaced -viewDidLoad):
NSURL *sampleURL = [[NSBundle mainBundle] URLForResource:@"sample" withExtension:@"m4v"];
movieFile = [[GPUImageMovie alloc] initWithURL:sampleURL];
filter = [[GPUImageChromaKeyFilter alloc] init];
[filter setColorToReplaceRed:0 green:1 blue:0];
[filter setEnabled:YES];
[movieFile addTarget:filter];
GPUImageView *filterView = (GPUImageView *)self.view;
[filter addTarget:filterView];
[movieFile startProcessing];
According to the documentation (which is sparse), this should have the effect of replacing all of the green in the video. Instead, the output I get tells me that the video is playing (and thus it's being copied into the application), but it doesn't seem to be doing any chroma keying. Why would this be? Do I need to manually set smoothing values and thresholds? I shouldn't, because the source only contains two colors (0x00FF00 and 0x000000).
I have tested this on the device as well, to no avail. Almost all other filters I attempt to use work, such as GPUImageRGBFilter, GPUImageSepiaFilter, etc. Could GPUImageChromaKeyFilter just be broken?
Any help with this would be appreciated, as at this point I'm scraping the bottom of the barrel for transparency on a video.
Similar to the original question, I wanted to put a green-screen video on top of a custom view hierarchy, including live video. It turned out this was not possible with the standard GPUImage chroma key filter(s): instead of alpha blending, it blended the green pixels with the background pixels. For example, a red background became yellow and blue became cyan.
The way to get it working involves two steps:
1) make sure the filterview has a transparent background:
filterView.backgroundColor=[UIColor clearColor];
2) Modify GPUImageChromaKeyFilter.m
old: gl_FragColor = vec4(textureColor.rgb, textureColor.a * blendValue);
new: gl_FragColor = vec4(textureColor.rgb * blendValue, 1.0 * blendValue);
Now all keyed (for example green) pixels in the video become transparent and uncover whatever is below the filterview, incl. (live-)video.
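For context, a minimal sketch of the wiring this approach assumes, based on the question's setup plus the two steps above (so GPUImageChromaKeyFilter.m is already modified as in step 2):
movieFile = [[GPUImageMovie alloc] initWithURL:sampleURL];

filter = [[GPUImageChromaKeyFilter alloc] init];
[filter setColorToReplaceRed:0 green:1 blue:0];

// Step 1: the view showing the keyed video needs a transparent background so
// whatever sits below it (e.g. live video) shows through the keyed pixels.
GPUImageView *filterView = (GPUImageView *)self.view;
filterView.backgroundColor = [UIColor clearColor];

[movieFile addTarget:filter];
[filter addTarget:filterView];
[movieFile startProcessing];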
I got it to work by using GPUImageChromaKeyBlendFilter and setting a background image. I'd like to know though if we can do without having to set a background. When I don't set the background, the movie shows as white, but adding the background renders fine...
filter = [[GPUImageChromaKeyBlendFilter alloc] init];
[(GPUImageChromaKeyBlendFilter *)filter setColorToReplaceRed:0.0 green:1.0 blue:0.0];
[(GPUImageChromaKeyBlendFilter *)filter setThresholdSensitivity:0.4];
UIImage *inputImage = [UIImage imageNamed:@"background.png"];
sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
[sourcePicture addTarget:filter];
[sourcePicture processImage];
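The snippet above omits the movie and view wiring; a sketch of how it presumably combines with the question's setup, assuming the green-screen movie is the blend filter's first input (the ordering and which input gets keyed are assumptions to verify against your GPUImage version):
movieFile = [[GPUImageMovie alloc] initWithURL:sampleURL];

filter = [[GPUImageChromaKeyBlendFilter alloc] init];
[(GPUImageChromaKeyBlendFilter *)filter setColorToReplaceRed:0.0 green:1.0 blue:0.0];
[(GPUImageChromaKeyBlendFilter *)filter setThresholdSensitivity:0.4];

// First input: the green-screen movie (assumed to be the keyed input).
[movieFile addTarget:filter];

// Second input: the background picture that shows through where the key matches.
UIImage *inputImage = [UIImage imageNamed:@"background.png"];
sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
[sourcePicture addTarget:filter];
[sourcePicture processImage];

GPUImageView *filterView = (GPUImageView *)self.view;
[filter addTarget:filterView];
[movieFile startProcessing];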
