Apply GPUImage filter to part of a video - iOS

I want to add two filters to one video, so half of the screen shows one filter and the other half another filter. But they should be applied to the same video, just on different parts of the screen.
Is it possible to do this with GPUImage? If not, what are the alternatives?

While still a little experimental, the Swift version of GPUImage has a new capability for masking filter operations on images.
Most filters (though not all at present) have a mask property that takes an image defining the regions the filter should apply to. The mask's alpha channel marks those regions: opaque areas are filtered, transparent areas are left untouched.
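For the split-screen case in the question, that might look something like the sketch below. This is only a rough sketch against the GPUImage 2 (Swift) API: the two half-screen mask PNGs are hypothetical, and whether a PictureInput can be assigned directly to the mask property should be checked against the framework version you're using.

import GPUImage

// Hedged sketch: two filters, each masked to one half of the video.
// "leftHalfMask.png" / "rightHalfMask.png" are hypothetical images whose opaque
// half marks where each filter applies (opaque = filtered, transparent = passthrough).
do {
    let movie = try MovieInput(url: videoURL, playAtActualSpeed: true) // videoURL: URL of your video file
    let renderView = RenderView(frame: CGRect(x: 0, y: 0, width: 720, height: 1280)) // add this to your view hierarchy

    let saturation = SaturationAdjustment()            // filter for the left half
    saturation.saturation = 0.0                        // desaturate that half so the split is visible
    let leftMask = PictureInput(imageName: "leftHalfMask.png")
    saturation.mask = leftMask
    leftMask.processImage()

    let pixellate = Pixellate()                        // filter for the right half
    let rightMask = PictureInput(imageName: "rightHalfMask.png")
    pixellate.mask = rightMask
    rightMask.processImage()

    // Each masked filter only touches its own half; everything else passes through.
    movie --> saturation --> pixellate --> renderView
    movie.start()
} catch {
    print("Couldn't open movie: \(error)")
}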

Related

GPUImage - Custom Histogram Generator

I'm trying to use GPUImage to implement a histogram in my app. The example project on the GPUImage GitHub called FilterShowcase comes with a good histogram generator, but due to the UI design of the app I'm making, I'll need to write my own custom graph to display the histogram values. Does anyone know how I can get the RGB values from the GPUImageHistogramFilter so I can pop them into my own graph?
The GPUImageHistogramFilter produces a 256x3 image where the center 256x1 line of that contains the red, green, and blue values for the histogram packed in the RGB channels. iOS doesn't support a 1-pixel-high render framebuffer, so I have to pad it out to three pixels high.
The GPUImageHistogramGenerator creates the visible histogram overlay you see in the sample applications, and it does that by taking in the 256x3 image and rendering an output image using a custom shader that colors in bars whose height depends on the input color value. It's a quick, on-GPU implementation.
If you want to do something more custom that doesn't use a shader, you can extract the histogram values by using a GPUImageRawDataOutput and pulling out the RGB components of the center 256x1 line. From there, you could draw the rest of your interface overlay, although anything done with Core Graphics may chew up a lot of processing power if it has to update on every frame.
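As a rough sketch of that extraction (calling the Objective-C GPUImage classes from Swift; the bridged names below follow the framework headers but are worth verifying in your project):

import GPUImage

// Hedged sketch: pull the 256 per-channel histogram bins out of
// GPUImageHistogramFilter via GPUImageRawDataOutput.
let histogramFilter = GPUImageHistogramFilter(histogramType: kGPUImageHistogramRGB)
let rawOutput = GPUImageRawDataOutput(imageSize: CGSize(width: 256, height: 3),
                                      resultsInBGRAFormat: false)
histogramFilter.addTarget(rawOutput)
// ...wire your camera or picture source into histogramFilter as usual...

rawOutput.newFrameAvailableBlock = { [weak rawOutput] in
    guard let rawOutput = rawOutput else { return }
    rawOutput.lockFramebufferForReading()    // present in recent GPUImage versions; drop if yours predates it
    let bytesPerRow = Int(rawOutput.bytesPerRowInOutput())
    let bytes: UnsafeMutablePointer<GLubyte> = rawOutput.rawBytesForImage
    var red = [UInt8](repeating: 0, count: 256)
    var green = [UInt8](repeating: 0, count: 256)
    var blue = [UInt8](repeating: 0, count: 256)
    // The middle row of the 256x3 output holds the packed histogram values.
    let row = bytes + bytesPerRow
    for bin in 0..<256 {
        red[bin]   = row[bin * 4]       // swap these offsets if your output is BGRA
        green[bin] = row[bin * 4 + 1]
        blue[bin]  = row[bin * 4 + 2]
    }
    rawOutput.unlockFramebufferAfterReading()
    // Hand red/green/blue to your custom graph here.
}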

Background subtraction similar to Movavi

There is a piece of software called Movavi Photo Editor which has a background removal (subtraction) feature that works the following way: the user marks areas of the image which belong to an object and areas which belong to the background.
These marks give the software clues about what the object and the background look like, and help it remove the background from the image.
Example: https://img.movavi.com/movavi.com.12/images/how-to/en/how-to-remove-background-from-image/2.jpg
I'm interested in using a similar technique in my OpenCV project for object detection, so I was wondering: how can this technique be implemented in OpenCV?
I guess it works with (adaptive) region growing, possibly with constraints on the growth. You should make yourself familiar with these algorithms, but the basic idea is: the background strokes (red) select seed pixel values (perhaps the average or median of the marked pixels). The algorithm then looks at the neighbourhood of those seed pixels and decides whether the neighbouring pixels have the same value, plus or minus a certain threshold. If they do, they are marked as background too. Pixels inside the constraint (the green border) are skipped.
In OpenCV you would do this with floodFill or something similar.
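Purely to make the thresholded region-growing idea concrete, here is a hand-rolled sketch. This is not OpenCV's own floodFill (whose loDiff/upDiff tolerances play the same role as the threshold here); the function and parameter names are made up for illustration.

// Hand-rolled sketch of thresholded region growing on grayscale pixel data.
// `pixels` is row-major grayscale data, `seeds` are the user's background marks,
// `barrier` holds the indices of pixels under the green "object" strokes that
// growth must not cross, and `threshold` is the allowed value difference.
func growBackground(pixels: [UInt8], width: Int, height: Int,
                    seeds: [(x: Int, y: Int)], barrier: Set<Int>,
                    threshold: Int) -> [Bool] {
    var isBackground = [Bool](repeating: false, count: pixels.count)
    // Reference value: the median of the marked background pixels.
    let seedValues = seeds.map { Int(pixels[$0.y * width + $0.x]) }.sorted()
    guard !seedValues.isEmpty else { return isBackground }
    let reference = seedValues[seedValues.count / 2]

    var queue = seeds.map { $0.y * width + $0.x }
    for index in queue { isBackground[index] = true }

    while let index = queue.popLast() {
        let x = index % width, y = index / width
        for (nx, ny) in [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)] {
            guard nx >= 0, nx < width, ny >= 0, ny < height else { continue }
            let nIndex = ny * width + nx
            guard !isBackground[nIndex], !barrier.contains(nIndex) else { continue }
            // Grow only into pixels whose value stays within the threshold of the seeds.
            if abs(Int(pixels[nIndex]) - reference) <= threshold {
                isBackground[nIndex] = true
                queue.append(nIndex)
            }
        }
    }
    return isBackground
}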

Edge Detection in a particular frame of entire image

I am using GPUImageSobelEdgeDetectionFilter from project GPUImage for edge detection.
My requirement is that I want to detect edges in an image, but only in a centre region of 200 x 200 pixels; the rest of the image should not be touched.
There is no direct API in the framework to provide a CGRect for the edge-detection coordinates. I do have an alternative approach: crop the original image, pass the crop through edge detection, and finally superimpose the result on the original. But this sounds like a hack to me.
Any idea if there is a direct way to do it?
The only way to do that is, as you suggest, to do a crop and work with the cropped image (roughly along the lines of the sketch below).
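A hedged sketch of that route, using the Objective-C GPUImage classes from Swift. The bridged names follow the framework headers but are worth double-checking, and the 200 x 200 centre region is expressed through GPUImageCropFilter's normalized cropRegion:

import GPUImage
import UIKit

// Hedged sketch: crop the centre 200x200, run Sobel edge detection on the crop,
// then draw the processed patch back over the untouched original.
func edgeDetectCentre(of image: UIImage) -> UIImage? {
    let regionSize = CGSize(width: 200, height: 200)
    let origin = CGPoint(x: (image.size.width - regionSize.width) / 2,
                         y: (image.size.height - regionSize.height) / 2)

    let source: GPUImagePicture = GPUImagePicture(image: image)
    let crop = GPUImageCropFilter()
    crop.cropRegion = CGRect(x: origin.x / image.size.width,        // normalized coordinates
                             y: origin.y / image.size.height,
                             width: regionSize.width / image.size.width,
                             height: regionSize.height / image.size.height)
    let edges = GPUImageSobelEdgeDetectionFilter()

    source.addTarget(crop)
    crop.addTarget(edges)
    edges.useNextFrameForImageCapture()
    source.processImage()
    let edgePatch: UIImage = edges.imageFromCurrentFramebuffer()

    // Superimpose the processed centre patch back onto the original image.
    UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
    defer { UIGraphicsEndImageContext() }
    image.draw(at: .zero)
    edgePatch.draw(in: CGRect(origin: origin, size: regionSize))
    return UIGraphicsGetImageFromCurrentImageContext()
}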
If you're willing to switch over to the newer GPUImage 2, this is one of the core new features in that version of the framework. Filters can be partially applied to any region of an image, leaving the remainder of the image untouched. This includes the Sobel edge detection, and the masking of the image can be done using arbitrary shapes:
To partially apply a Sobel edge detection filter, you'd set up the filter chain as normal, then set a mask to the filter. In the below, the mask is a circle, generated to match a 480x640 image:
let circleGenerator = CircleGenerator(size:Size(width:480, height:640))
edgeDetectionFilter.mask = circleGenerator
circleGenerator.renderCircleOfRadius(0.25, center:Position.center, circleColor:Color.white, backgroundColor:Color.transparent)
The area within the circle will have the filter applied, and the area outside will simply pass through the previous pixel colors.
This uses a stencil mask to perform this partial rendering, so it doesn't slow rendering by much. Unfortunately, I've pretty much ceased my work on the Objective-C version of GPUImage, so this won't be getting backported to that older version of the framework.

Filtering out shadows when diffing frames in opencv

I am using OpenCV to process some videos where a user is placing their hands on different parts of a wall. I've selected some regions of interest and I'm currently just using cv2.absdiff on the original image of the wall with no user and the current frame to detect whether the user has their hand in a region of interest by looking at the average pixel difference. If it's above some threshold, I consider that region "activated".
The problem I'm having is that some of the video clips contain lighting and positions that result in the user casting a shadow over certain ROIs, such that they are above the threshold. Is there a good way to filter out shadows when diffing images?
OpenCV has a Mixture-of-Gaussians-based background subtractor which also has an option to account for shadows. You can use this instead of absdiff, although MOG can be a bit slow compared to absdiff.
Alternatively, you can convert to HSV and check that the hue doesn't change: a shadow darkens a region but largely preserves its hue.
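As a rough per-pixel illustration of that hue test (hand-rolled here just to show the idea; in OpenCV itself you would convert both frames to HSV with cvtColor and compare the H channels):

// Hand-rolled illustration of the "hue barely changes under a shadow" test.
// A pixel that got darker but kept roughly the same hue is likely shadow rather
// than a hand, so it shouldn't count toward activating a region of interest.
func hue(r: Double, g: Double, b: Double) -> Double {
    let maxC = max(r, g, b), minC = min(r, g, b)
    let delta = maxC - minC
    guard delta > 0 else { return 0 }              // grey pixel: hue is undefined, call it 0
    var h: Double
    if maxC == r      { h = (g - b) / delta }
    else if maxC == g { h = 2 + (b - r) / delta }
    else              { h = 4 + (r - g) / delta }
    h *= 60
    return h < 0 ? h + 360 : h
}

func isProbablyShadow(background: (r: Double, g: Double, b: Double),
                      current: (r: Double, g: Double, b: Double),
                      hueTolerance: Double = 15) -> Bool {
    let shift = abs(hue(r: background.r, g: background.g, b: background.b)
                  - hue(r: current.r, g: current.g, b: current.b))
    let wrapped = min(shift, 360 - shift)          // hue is circular
    let gotDarker = (current.r + current.g + current.b) < (background.r + background.g + background.b)
    return gotDarker && wrapped <= hueTolerance
}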
You could first detect shadow regions in the original images, and exclude them from the difference imaging part. This paper provides a simple but effective method to detect shadows in images. They explore a colour space that is invariant to shadows.

iOS: outlining the opaque parts of a partly-transparent image

I have an application which requires that a solid black outline be drawn around a partly-transparent UIImage. Not around the frame of the image, but rather around all the opaque parts of the image itself. I.e., think of a transparent PNG with an opaque white "X" on it -- I need to outline the "X" in black.
To make matters trickier, AFTER the outline is drawn, the opacity of the original image will be adjusted, but the outline must remain opaque -- so the outline I generate has to include only the outline, and not the original image.
My current technique is this:
1. Create a new UIView which has the dimensions of the original image.
2. Duplicate the UIImage four times and add the duplicates as subviews of the UIView, with each copy offset diagonally from the original position by a couple of pixels.
3. Turn that UIView into an image (via the typical UIGraphicsGetImageFromCurrentImageContext method).
4. Using CGImageMaskCreate and CGImageCreateWithMask, subtract the original image from this new image, so only the outline remains.
It works. Even with only the 4 offset images, the result looks quite good. However, it's horribly inefficient, and causes a good solid 4-second delay on an iPhone 4.
So what I need is a nice, speedy, efficient way to achieve the same thing, which is fully supported by iOS 4.0.
Any great ideas? :)
I would like to point out that whilst a few people have suggested edge detection, this is not an appropriate solution. Edge detection is for finding edges within image data where there is no obvious exact edge representation in the data.
For you, edges are much better defined: you are looking for a well-defined outline. An edge in your case is simply any fully transparent pixel that sits next to a pixel which is not fully transparent. Iterate through every pixel in the image and set it to black if it fulfils those conditions (a sketch of exactly that loop follows).
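Here's a minimal Core Graphics sketch of that loop. The function name and the alphaThreshold parameter are my own additions, and a production version would want to reuse the bitmap contexts rather than recreate them each time:

import UIKit

// Minimal sketch: walk every pixel, and turn each transparent pixel that
// touches an opaque neighbour into an opaque black outline pixel.
func outlineImage(from image: UIImage, alphaThreshold: UInt8 = 0) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width, height = cgImage.height
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue

    // Render the source into an RGBA8 bitmap we can inspect, and prepare an
    // empty RGBA8 bitmap for the outline.
    guard let source = CGContext(data: nil, width: width, height: height,
                                 bitsPerComponent: 8, bytesPerRow: 0,
                                 space: colorSpace, bitmapInfo: bitmapInfo),
          let output = CGContext(data: nil, width: width, height: height,
                                 bitsPerComponent: 8, bytesPerRow: 0,
                                 space: colorSpace, bitmapInfo: bitmapInfo)
    else { return nil }
    let fullRect = CGRect(x: 0, y: 0, width: width, height: height)
    source.draw(cgImage, in: fullRect)
    output.clear(fullRect)

    guard let sourceData = source.data, let outputData = output.data else { return nil }
    let src = sourceData.assumingMemoryBound(to: UInt8.self)
    let dst = outputData.assumingMemoryBound(to: UInt8.self)
    let srcRow = source.bytesPerRow, dstRow = output.bytesPerRow

    func alpha(_ x: Int, _ y: Int) -> UInt8 { src[y * srcRow + x * 4 + 3] }

    for y in 0..<height {
        for x in 0..<width where alpha(x, y) <= alphaThreshold {
            // A transparent pixel becomes part of the outline if any of its
            // four direct neighbours is opaque.
            let touchesOpaque = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)].contains { p in
                let (nx, ny) = p
                return nx >= 0 && nx < width && ny >= 0 && ny < height && alpha(nx, ny) > alphaThreshold
            }
            if touchesOpaque {
                let offset = y * dstRow + x * 4
                dst[offset] = 0; dst[offset + 1] = 0; dst[offset + 2] = 0   // black
                dst[offset + 3] = 255                                       // fully opaque
            }
        }
    }

    guard let outlineCGImage = output.makeImage() else { return nil }
    return UIImage(cgImage: outlineCGImage, scale: image.scale, orientation: image.imageOrientation)
}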
Alternatively, for an anti-aliased result, get a boolean representation of the image, and pass over it a small anti-aliased circle kernel. I know you said custom filters are not supported, but if you have direct access to image data this wouldn't be too difficult to implement by hand...
Cheers, hope this helps.
For the sake of contributing new ideas:
A variant on your current implementation would use CALayer's support for shadows, which it calculates from the actual pixel contents of the layer rather than merely its bounding rectangle, and for which it uses the GPU. You can try amping up the shadowOpacity to some massive value to try to eliminate the feathering; failing that, you could render to a suitable CGContext, extract the alpha channel only, and manually process it to apply a threshold test on alpha values, pushing them either to fully opaque or fully transparent.
You can achieve that final processing step on the GPU even under ES 1 in a variety of ways. You'd use the alpha test to apply the actual threshold; you could then, say, prime the depth buffer to 1.0, disable colour output and the depth test, draw the version with the shadow at a depth of 0.5, draw the version without the shadow at a depth of 1.0, then enable colour output and depth tests and draw a solid black full-screen quad at a depth of 0.75. So it's like using the depth buffer to emulate a stencil (since the GPUs Apple used before the ES 2 capable devices didn't support a stencil buffer).
That, of course, assumes that CALayer shadows appear outside of the compositor, which I haven't checked.
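To make the CALayer part of that concrete, a minimal sketch (the property values are illustrative starting points, and whether pushing shadowOpacity hard enough really eliminates the feathering is exactly the open question above):

import UIKit

// Minimal sketch of the CALayer-shadow idea: the shadow is computed from the
// image's actual alpha, so it hugs the opaque shape rather than the bounding box.
// `image` is assumed to be your partly transparent UIImage.
let outlineLayer = CALayer()
outlineLayer.frame = CGRect(origin: .zero, size: image.size)
outlineLayer.contents = image.cgImage
outlineLayer.shadowColor = UIColor.black.cgColor
outlineLayer.shadowOffset = .zero    // shadow on all sides, i.e. an outline
outlineLayer.shadowRadius = 2        // how far the outline extends
outlineLayer.shadowOpacity = 1       // the suggestion above is to push this up to fight the feathering

// To end up with the outline only, you would then render this layer into a
// CGContext, keep just the alpha channel, and threshold it as described above.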
Alternatively, if you're willing to limit your support to ES 2 devices (everything 3GS and later), then you could upload your image as a texture and do the entire process on the GPU. But that would technically leave some iOS 4 capable devices unsupported, so I assume it isn't an option.
You just need to implement an edge detection algorithm, but instead of using brightness or color to determine where the edges are, use opacity. There are a number of different ways to go about that. For example, you can look at each pixel and its neighbors to identify areas where the opacity crosses whatever threshold you've set. Whenever you need to look at every pixel of an image in Mac OS X or iOS, think Core Image. There's a helpful series of blog posts starting with this one that looks at implementing a custom Core Image filter -- I'd start there to build an edge detection filter.
Instead of using a UIView, I suggest just pushing a context, like the following:
UIGraphicsBeginImageContextWithOptions(image.size, NO, 0.0);

// Draw your image 4 times and mask it however you like; you can just copy and
// paste your current drawing code here.
....

UIImage *outlinedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This will be much faster than your UIView approach.
