GPUImage - Custom Histogram Generator - iOS

I'm trying to use GPUImage to implement a histogram in my app. The example project on the GPUImage GitHub called FilterShowcase comes with a good histogram generator, but due to the UI design of the app I'm making, I'll need to write my own custom graph to display the histogram values. Does anyone know how I can get the RGB values from the GPUImageHistogramFilter so I can pop them into my own graph?

The GPUImageHistogramFilter produces a 256x3 image whose center 256x1 line contains the red, green, and blue histogram values packed into the RGB channels. iOS doesn't support a 1-pixel-high render framebuffer, so I have to pad it out to three pixels high.
The GPUImageHistogramGenerator creates the visible histogram overlay you see in the sample applications, and it does that by taking in the 256x3 image and rendering an output image using a custom shader that colors in bars whose height depends on the input color value. It's a quick, on-GPU implementation.
If you want to do something more custom that doesn't use a shader, you can extract the histogram values by attaching a GPUImageRawDataOutput and pulling out the RGB components of the center 256x1 line. From there, you could draw the rest of your interface overlay, although anything done with Core Graphics may chew a lot of processing power if it has to update on every frame.
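To make that buffer layout concrete, here is a quick Python sketch (illustration only; the real extraction happens in Objective-C against GPUImageRawDataOutput, and the BGRA byte order assumed here depends on how you configured the raw data output):

import numpy as np

def unpack_histogram(raw_bytes, bytes_per_row):
    # raw_bytes: the 256x3 padded buffer handed back by the raw data output.
    buf = np.frombuffer(raw_bytes, dtype=np.uint8)
    rows = buf.reshape(3, bytes_per_row)           # three padded rows
    middle = rows[1, :256 * 4].reshape(256, 4)     # the center 256x1 line, 4 bytes per pixel
    blue, green, red = middle[:, 0], middle[:, 1], middle[:, 2]   # BGRA ordering assumed
    return red, green, blue                        # 256 bins per channel, ready for your own graph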

Related

Reading pixel Data in Fragment Shader

I am building a photo-editing application with Metal on iOS. I have an image texture, and I want a tool where, when the user taps the texture, I take a small square area around the tapped point, read the colors in that area, and convert it to grayscale.
I know the pixel data of a texture can be read in a kernel (compute) function. Is it possible to read the pixel data in a fragment shader and implement the scenario above?
What you are describing is covered by the HelloCompute Metal example provided by Apple. Just download it and take a look at how a texture is rendered and how a shader can be used to convert color pixels to grayscale. The BasicTexturing example also shows how to do a plain texture render on its own.

Filtering out shadows when diffing frames in OpenCV

I am using OpenCV to process some videos where a user is placing their hands on different parts of a wall. I've selected some regions of interest and I'm currently just using cv2.absdiff on the original image of the wall with no user and the current frame to detect whether the user has their hand in a region of interest by looking at the average pixel difference. If it's above some threshold, I consider that region "activated".
The problem I'm having is that some of the video clips contain lighting and positions that result in the user casting a shadow over certain ROIs, such that they are above the threshold. Is there a good way to filter out shadows when diffing images?
OpenCV has a Mixture of Gaussians based background subtractor which also has an option to account for shadows. You can use this instead of absdiff, although MOG can be a bit slow compared to absdiff.
Alternatively, you can convert to HSV, and check that the Hue doesn't change.
You could first detect shadow regions in the original images, and exclude them from the difference imaging part. This paper provides a simple but effective method to detect shadows in images. They explore a colour space that is invariant to shadows.
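As a sketch of the background-subtractor suggestion above (OpenCV's MOG2 variant exposes shadow detection directly; the file name and ROI handling here are placeholders):

import cv2

cap = cv2.VideoCapture("wall_clip.mp4")            # placeholder input clip
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # With detectShadows=True, foreground pixels are 255 and shadow pixels
    # get an intermediate value (127 by default), so threshold them away.
    _, foreground = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    # ...then test the mean of `foreground` inside each ROI instead of absdiff...

cap.release()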

Colors not blending properly in OpenGL ES

I'm trying to render 2 (light) circles in OpenGL ES in 2D. The middle is white, the border is black. It works fine, as long as they don't overlap:
But as soon as they do, I get this artifact:
I'm using glBlendFunc(GL_ONE, GL_ONE) with blending enabled of course.
What could be causing this? Is there a way to fix it?
I'd like them to blend more like this:
Thanks!
Are your circles currently linear gradients? You might get less of an artifact if you have a different curve.
Based on your example, though, it looks like you want the maximum intensity of the two circles, not the sum of the intensities. It appears that Apple's OpenGL ES 2.0 implementation supports the EXT_blend_minmax extension, which lets you specify that the resulting fragment values should be the maximum of the incoming and existing values. Maybe try that?
The result you're seeing is exactly what should come out for linear gradients. Hint: open up Photoshop or the GIMP, draw two radial gradients into two layers, and set them to "Addition" blending mode. It will look exactly like your picture.
An effect like the one you want is obtained with squared gradients. If your gradient is in the range 0…1, take the square of the value and draw that. You may apply a sqrt later if you want to linearize the individual gradients.
Note that this is not something easily done in the blending stage; it can be done with multiple passes, but then it's actually more straightforward to use a shader to combine the passes from two FBOs.
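Just to make the arithmetic behind those suggestions concrete, here is a small numpy sketch (not GL code) comparing additive, max, and squared-then-added blending of two linear radial gradients; the sizes and centres are arbitrary:

import numpy as np

size = 200
y, x = np.mgrid[0:size, 0:size]

def radial(cx, cy, radius=80.0):
    # Linear gradient: 1 at the centre, falling to 0 at the radius.
    return np.clip(1.0 - np.hypot(x - cx, y - cy) / radius, 0.0, 1.0)

a = radial(80, 100)
b = radial(120, 100)

added   = np.clip(a + b, 0.0, 1.0)           # glBlendFunc(GL_ONE, GL_ONE): clips to a flat plateau where they overlap
maxed   = np.maximum(a, b)                   # EXT_blend_minmax behaviour: no plateau
squared = np.clip(a * a + b * b, 0.0, 1.0)   # squared gradients, then additive blending

# Count saturated pixels: the additive version saturates over a large overlap
# region, max only at the two centres, squared over a much smaller patch.
print((added >= 1.0).sum(), (maxed >= 1.0).sum(), (squared >= 1.0).sum())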

WebGL - Building objects with blocks

I'm trying to build some text using blocks, which I intend to customize later on. The attached image is a mockup of what I intend to do.
I was thinking of using WebGL, since I want to do it in 3D and I can't do any Flash, but I'm not sure how to construct the structure of cubes from the letters. Can anyone give me a suggestion or a technique to map text to a series of points so that, seen from afar, they draw that same text?
First, you need a font — a table of shapes for the characters — in a format you can read from your code. Do you already have one? If it's just a few letters, you could manually create polygons for each character.
Then, use a rasterization algorithm to convert the character shape into an array of present-or-absent points/cubes. If you have perfectly flat text, then use a 2D array; if your “customizations” will create depth effects then you will want a 3D array instead (“extruding” the shape by writing it identically into multiple planes of the array).
An alternative to the previous two steps, which is appropriate if your text does not vary at runtime, is to first create an image with your desired text on it, then use the pixels of the image as the abovementioned 2D array. In the browser, you can do this by using the 2D Canvas feature to draw an image onto a canvas and then reading the pixels out from it.
Then to produce a 3D shape from this voxel array, construct a polygon face for every place in the array where a “present” point meets an “absent” point. If you do this based on pairs of neighbors, you get a chunky pixel look (like Minecraft). If you want smooth slopes (like your example image), then you need a more complex technique; the traditional way to produce a smooth surface is marching cubes (but just doing marching cubes will round off all your corners).
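Here's a small Python sketch of that pipeline, with PIL standing in for the 2D canvas rasterization step (in the browser you'd read canvas pixels instead); the text, bitmap size, and font are placeholders:

import numpy as np
from PIL import Image, ImageDraw, ImageFont

# 1. Rasterize the text into a small bitmap (the "2D array" step).
img = Image.new("L", (64, 16), 0)
ImageDraw.Draw(img).text((0, 2), "HI", fill=255, font=ImageFont.load_default())
grid = np.array(img) > 128                       # True where a block should exist

# 2. Walk the grid; wherever a present cell meets an absent neighbour, emit a face.
faces = []
h, w = grid.shape
for yy in range(h):
    for xx in range(w):
        if not grid[yy, xx]:
            continue
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = xx + dx, yy + dy
            if not (0 <= nx < w and 0 <= ny < h) or not grid[ny, nx]:
                faces.append(((xx, yy), (dx, dy)))   # cell position plus outward normal

print(len(faces), "boundary faces")   # these become the quads of the WebGL mesh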

iOS: outlining the opaque parts of a partly-transparent image

I have an application which requires that a solid black outline be drawn around a partly-transparent UIImage. Not around the frame of the image, but rather around all the opaque parts of the image itself. I.e., think of a transparent PNG with an opaque white "X" on it -- I need to outline the "X" in black.
To make matters trickier, AFTER the outline is drawn, the opacity of the original image will be adjusted, but the outline must remain opaque -- so the outline I generate has to include only the outline, and not the original image.
My current technique is this:
Create a new UIView which has the dimensions of the original image.
Duplicate the UIImage 4 times and add the duplicates as subviews of the UIView, with each UIImage offset diagonally from the original location by a couple pixels.
Turn that UIView into an image (via the typical UIGraphicsGetImageFromCurrentImageContext method).
Using CGImageMaskCreate and CGImageCreateWithMask, subtract the original image from this new image, so only the outline remains.
It works. Even with only the 4 offset images, the result looks quite good. However, it's horribly inefficient, and causes a good solid 4-second delay on an iPhone 4.
So what I need is a nice, speedy, efficient way to achieve the same thing, which is fully supported by iOS 4.0.
Any great ideas? :)
I would like to point out that whilst a few people have suggested edge detection, this is not an appropriate solution. Edge detection is for finding edges within image data where there is no obvious exact edge representation in the data.
For you, the edges are much better defined: you are looking for a well-defined outline. An edge in your case is any pixel which is fully transparent and next to a pixel which is not fully transparent, simple as that! Iterate through every pixel in the image and set it to black if it meets those conditions.
Alternatively, for an anti-aliased result, get a boolean representation of the image, and pass over it a small anti-aliased circle kernel. I know you said custom filters are not supported, but if you have direct access to image data this wouldn't be too difficult to implement by hand...
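To pin down that rule, here's a short numpy sketch (illustration only; on iOS you'd run the same test over the bytes of a CGBitmapContext rather than a numpy array):

import numpy as np

def outline_mask(alpha):
    # alpha: 2D uint8 array holding the image's alpha channel.
    opaque = alpha > 0
    padded = np.pad(opaque, 1, constant_values=False)
    # True where any of the four neighbours is not fully transparent.
    neighbour_opaque = (padded[:-2, 1:-1] | padded[2:, 1:-1] |
                        padded[1:-1, :-2] | padded[1:-1, 2:])
    # Outline pixels: fully transparent themselves, but next to an opaque pixel.
    return (~opaque) & neighbour_opaque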
Cheers, hope this helps.
For the sake of contributing new ideas:
A variant on your current implementation would use CALayer's support for shadows, which it calculates from the actual pixel contents of the layer rather than merely its bounding rectangle, and for which it uses the GPU. You can try amping up the shadowOpacity to some massive value to try to eliminate the feathering; failing that, you could render to a suitable CGContext, take out the alpha channel only, and manually process it to apply a threshold test on alpha values, pushing them to either fully opaque or fully transparent.
You can achieve that final processing step on the GPU, even under ES 1, in a variety of ways. You'd use the alpha test to apply the actual threshold. You could then, say, prime the depth buffer to 1.0, disable colour output and the depth test, draw the version with the shadow at a depth of 0.5, draw the version without the shadow at a depth of 1.0, then enable colour output and depth tests and draw a solid black full-screen quad at a depth of 0.75. So it's like using the depth buffer to emulate a stencil (since the GPUs Apple used before the ES 2 capable devices didn't support a stencil buffer).
That, of course, assumes that CALayer shadows appear outside of the compositor, which I haven't checked.
Alternatively, if you're willing to limit your support to ES 2 devices (everything 3GS+) then you could upload your image as a texture and do the entire process over on the GPU. But that would technically leave some iOS 4 capable devices unsupported so I assume isn't an option.
You just need to implement an edge detection algorithm, but instead of using brightness or color to determine where the edges are, use opacity. There are a number of different ways to go about that. For example, you can look at each pixel and its neighbors to identify areas where the opacity crosses whatever threshold you've set. Whenever you need to look at every pixel of an image on Mac OS X or iOS, think Core Image. There's a helpful series of blog posts, starting with this one, that looks at implementing a custom Core Image filter -- I'd start there to build an edge detection filter.
Instead of using a UIView, I suggest just pushing a context, like the following:
UIGraphicsBeginImageContextWithOptions(image.size, NO, 0.0);

// Draw your image 4 times and mask it however you like; you can just copy and
// paste your current drawing code here.
// ...

UIImage *outlinedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This will be much faster than doing it through a UIView.
