iOS: outlining the opaque parts of a partly-transparent image

I have an application which requires that a solid black outline be drawn around a partly-transparent UIImage. Not around the frame of the image, but rather around all the opaque parts of the image itself. I.e., think of a transparent PNG with an opaque white "X" on it -- I need to outline the "X" in black.
To make matters trickier, AFTER the outline is drawn, the opacity of the original image will be adjusted, but the outline must remain opaque -- so the outline I generate has to include only the outline, and not the original image.
My current technique is this:
Create a new UIView which has the dimensions of the original image.
Duplicate the UIImage 4 times and add the duplicates as subviews of the UIView, with each UIImage offset diagonally from the original location by a couple pixels.
Turn that UIView into an image (via the typical UIGraphicsGetImageFromCurrentImageContext method).
Using CGImageMaskCreate and CGImageCreateWithMask, subtract the original image from this new image, so only the outline remains.
It works. Even with only the 4 offset images, the result looks quite good. However, it's horribly inefficient, and causes a good solid 4-second delay on an iPhone 4.
So what I need is a nice, speedy, efficient way to achieve the same thing, which is fully supported by iOS 4.0.
Any great ideas? :)

I would like to point out that whilst a few people have suggested edge detection, this is not an appropriate solution. Edge detection is for finding edges within image data where there is no obvious exact edge representation in the data.
In your case the edges are much better defined: you are looking for a well-defined outline. An edge here is simply any fully transparent pixel that sits next to a pixel which is not fully transparent, simple as that! Iterate through every pixel in the image and set each one to black if it fulfils that condition.
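A minimal sketch of that per-pixel pass (Swift and Core Graphics, chosen for brevity; the function name, the RGBA layout and the strictly-zero alpha test are assumptions of this sketch, not something from the question):
import UIKit

// Sketch: returns a new image containing only a 1 px black outline around the
// opaque pixels of `image`. Assumes 8-bit premultiplied RGBA.
func outlineImage(for image: UIImage) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width, height = cgImage.height
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue

    // One readable context holding the source, one writable context for the outline.
    guard let srcCtx = CGContext(data: nil, width: width, height: height,
                                 bitsPerComponent: 8, bytesPerRow: 0,
                                 space: colorSpace, bitmapInfo: bitmapInfo),
          let dstCtx = CGContext(data: nil, width: width, height: height,
                                 bitsPerComponent: 8, bytesPerRow: 0,
                                 space: colorSpace, bitmapInfo: bitmapInfo)
    else { return nil }
    srcCtx.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    dstCtx.clear(CGRect(x: 0, y: 0, width: width, height: height))

    guard let srcBuf = srcCtx.data?.assumingMemoryBound(to: UInt8.self),
          let dstBuf = dstCtx.data?.assumingMemoryBound(to: UInt8.self) else { return nil }
    let srcRow = srcCtx.bytesPerRow, dstRow = dstCtx.bytesPerRow

    // Alpha of the source pixel at (x, y); out-of-bounds counts as transparent.
    func alpha(_ x: Int, _ y: Int) -> UInt8 {
        guard x >= 0, x < width, y >= 0, y < height else { return 0 }
        return srcBuf[y * srcRow + x * 4 + 3]
    }

    // An edge pixel is a fully transparent pixel with a non-transparent neighbour.
    for y in 0..<height {
        for x in 0..<width where alpha(x, y) == 0 {
            if alpha(x - 1, y) > 0 || alpha(x + 1, y) > 0 ||
               alpha(x, y - 1) > 0 || alpha(x, y + 1) > 0 {
                let i = y * dstRow + x * 4
                dstBuf[i] = 0; dstBuf[i + 1] = 0; dstBuf[i + 2] = 0   // black
                dstBuf[i + 3] = 255                                   // fully opaque
            }
        }
    }

    guard let outline = dstCtx.makeImage() else { return nil }
    return UIImage(cgImage: outline, scale: image.scale, orientation: image.imageOrientation)
}
This only produces a 1-pixel outline; for a thicker one you would either repeat the pass or test neighbours within a larger radius.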
Alternatively, for an anti-aliased result, get a boolean representation of the image, and pass over it a small anti-aliased circle kernel. I know you said custom filters are not supported, but if you have direct access to image data this wouldn't be too difficult to implement by hand...
Cheers, hope this helps.

For the sake of contributing new ideas:
A variant on your current implementation would use CALayer's support for shadows, which it calculates from the actual pixel contents of the layer rather than merely its bounding rectangle, and for which it uses the GPU. You can try amping up the shadowOpacity to some massive value to try to eliminate the feathering; failing that you could render to a suitable CGContext, extract only the alpha channel and manually process it to apply a threshold test on alpha values, pushing them either to fully opaque or fully transparent.
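For what it's worth, a rough sketch of the shadow half of that idea (the CALayer properties are real API, but the spread value and the function name are placeholders, and whether an offscreen render actually captures the shadow is the same caveat raised at the end of this answer). The alpha-threshold pass on the rendered bitmap would still be a separate step:
import UIKit

// Sketch: let Core Animation compute a shadow from the layer's opaque pixels,
// then snapshot the layer tree. The feathered shadow still needs thresholding.
func shadowedSnapshot(of image: UIImage, spread: CGFloat = 2) -> UIImage {
    let canvas = CGSize(width: image.size.width + spread * 2,
                        height: image.size.height + spread * 2)

    let imageLayer = CALayer()
    imageLayer.frame = CGRect(x: spread, y: spread,
                              width: image.size.width, height: image.size.height)
    imageLayer.contents = image.cgImage
    imageLayer.shadowColor = UIColor.black.cgColor
    imageLayer.shadowOffset = .zero
    imageLayer.shadowRadius = spread       // how far the "outline" reaches
    imageLayer.shadowOpacity = 1.0         // crank up to reduce feathering
    imageLayer.masksToBounds = false

    let container = CALayer()
    container.frame = CGRect(origin: .zero, size: canvas)
    container.addSublayer(imageLayer)

    let renderer = UIGraphicsImageRenderer(size: canvas)
    return renderer.image { ctx in
        container.render(in: ctx.cgContext)
    }
}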
You can achieve that final processing step on the GPU even under ES 1 in a variety of ways. You'd use the alpha test to apply the actual threshold; you could then, say, prime the depth buffer to 1.0, disable colour output and the depth test, draw the version with the shadow at a depth of 0.5, draw the version without the shadow at a depth of 1.0, then enable colour output and depth tests and draw a solid black full-screen quad at a depth of 0.75. So it's like using the depth buffer to emulate a stencil (since the GPU Apple used before its ES 2 capable devices didn't support a stencil buffer).
That, of course, assumes that CALayer shadows appear outside of the compositor, which I haven't checked.
Alternatively, if you're willing to limit your support to ES 2 devices (everything 3GS+) then you could upload your image as a texture and do the entire process on the GPU. But that would technically leave some iOS 4 capable devices unsupported, so I assume it isn't an option.

You just need to implement an edge detection algorithm, but instead of using brightness or color to determine where the edges are, use opacity. There are a number of different ways to go about that. For example, you can look at each pixel and its neighbors to identify areas where the opacity crosses whatever threshold you've set. Whenever you need to look at every pixel of an image on Mac OS X or iOS, think Core Image. There's a helpful series of blog posts, starting with this one, that looks at implementing a custom Core Image filter -- I'd start there to build an edge detection filter.
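To illustrate the Core Image route on current iOS versions (this sketch substitutes the built-in CIColorMatrix and CIEdges filters for the custom filter those posts build, so treat the filter choice and the intensity value as assumptions): copy the alpha channel into RGB so the edge filter responds to opacity, then detect edges.
import CoreImage
import UIKit

// Sketch: find edges in the *opacity* of an image by routing its alpha channel
// into RGB with CIColorMatrix, then applying the built-in CIEdges filter.
func alphaEdges(of image: UIImage) -> CIImage? {
    guard let input = CIImage(image: image) else { return nil }

    // RGB := alpha, A := 1, so CIEdges "sees" the opacity as brightness.
    let alphaAsLuma = CIFilter(name: "CIColorMatrix", parameters: [
        kCIInputImageKey: input,
        "inputRVector": CIVector(x: 0, y: 0, z: 0, w: 1),
        "inputGVector": CIVector(x: 0, y: 0, z: 0, w: 1),
        "inputBVector": CIVector(x: 0, y: 0, z: 0, w: 1),
        "inputAVector": CIVector(x: 0, y: 0, z: 0, w: 0),
        "inputBiasVector": CIVector(x: 0, y: 0, z: 0, w: 1)
    ])?.outputImage

    guard let luma = alphaAsLuma else { return nil }
    return CIFilter(name: "CIEdges", parameters: [
        kCIInputImageKey: luma,
        kCIInputIntensityKey: 10.0       // illustrative value
    ])?.outputImage
}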

Instead of using a UIView, I suggest just pushing a context like the following:
UIGraphicsBeginImageContextWithOptions(image.size, NO, 0.0);
// Draw your image 4 times and mask it however you like; you can just copy & paste
// your current drawing code here.
....
UIImage *outlinedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This will be much faster than your UIView approach.
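To make that concrete, here is a hedged sketch of the whole pipeline in one offscreen pass (Swift and UIGraphicsImageRenderer are used for brevity; the 2-point offset, the sourceIn tint and the destinationOut erase are choices of this sketch rather than the poster's exact drawing code, and it assumes the image has enough transparent padding for the outline to fit inside its bounds):
import UIKit

// Sketch: stamp the image four times with diagonal offsets to thicken its
// silhouette, tint the result black, then erase the original footprint so
// only the outline ring is left.
func outlineByOffsets(of image: UIImage, offset d: CGFloat = 2) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { ctx in
        let offsets = [CGPoint(x: -d, y: -d), CGPoint(x: d, y: -d),
                       CGPoint(x: -d, y: d), CGPoint(x: d, y: d)]

        // 1. Thickened silhouette from four offset copies.
        for o in offsets {
            image.draw(at: o)
        }

        // 2. Tint everything drawn so far solid black, keeping its alpha.
        ctx.cgContext.setFillColor(UIColor.black.cgColor)
        ctx.cgContext.setBlendMode(.sourceIn)
        ctx.cgContext.fill(CGRect(origin: .zero, size: image.size))

        // 3. Punch the original image's footprint back out; only the ring remains.
        image.draw(at: .zero, blendMode: .destinationOut, alpha: 1.0)
    }
}
The result can then be layered under the original image, whose opacity you can change later without touching the outline.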

Related

Improve an image mask to be solid with OpenCV

I'm using OpenCV to merge multiple frames of a single video into a single image based on a movement mask. The video is taken by a moving phone with slight hand movement. I was able to align the frames (using feature matching), calculate the background (median), and estimate the movement mask (using BackgroundSubtractorMOG2), but the mask doesn't give me the perfect shape of the moving body; instead, it has "holes". I'm using that mask to copy pixels from the source frame onto the calculated background, and I'm not happy with the result because the image has the same holes as the mask. It's fine if the saturated mask is not precise: since all the frames are aligned, I don't mind taking a little extra of the source image along with the saturated mask.
Is there a good way to do this kind of mask improvement using OpenCV?
UPDATE:
Trying to apply dilation and noise reduction gives the following result. It's not perfect but acceptable. With better noise control/reduction I feel it should be possible to fill the largest contour, although I still have some empty areas.
And another example, where the whole object is in the scene -- I really want no holes in the person.

Edge Detection in a particular frame of entire image

I am using GPUImageSobelEdgeDetectionFilter from the GPUImage project for edge detection.
My requirement is that I want to detect edges in an image, but only in a centre frame of 200 x 200; the rest of the image should not be touched.
There is no direct API in the framework to provide a CGRect for the edge detection coordinates. I do have an alternative approach of cropping the original image, passing the crop through edge detection, and finally super-imposing it on the original, but this sounds like a hack to me.
Any idea if there is a direct way to do it?
The only way to do that is, as you suggest, to do a crop and work with the cropped image.
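If you go the crop route, the compositing side is straightforward. A sketch, where applyEdgeDetection stands in for whatever GPUImage filter call you already use and the 200 x 200 centre rect is assumed to be in pixel coordinates:
import UIKit

// Sketch: crop the centre 200 x 200 region, run it through your existing edge
// detection, then draw the filtered patch back over the untouched original.
func edgeDetectCentre(of image: UIImage,
                      applyEdgeDetection: (UIImage) -> UIImage) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }

    let cropRect = CGRect(x: (cgImage.width - 200) / 2,
                          y: (cgImage.height - 200) / 2,
                          width: 200, height: 200)
    guard let croppedCG = cgImage.cropping(to: cropRect) else { return nil }
    let filteredPatch = applyEdgeDetection(UIImage(cgImage: croppedCG))

    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        image.draw(at: .zero)
        let scale = image.size.width / CGFloat(cgImage.width)   // pixels -> points
        filteredPatch.draw(in: CGRect(x: cropRect.origin.x * scale,
                                      y: cropRect.origin.y * scale,
                                      width: cropRect.width * scale,
                                      height: cropRect.height * scale))
    }
}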
If you're willing to switch over to the newer GPUImage 2, this is one of the core new features in that version of the framework. Filters can be partially applied to any region of an image, leaving the remainder of the image untouched. This includes the Sobel edge detection, and the masking of the image can be done using arbitrary shapes:
To partially apply a Sobel edge detection filter, you'd set up the filter chain as normal, then set a mask to the filter. In the below, the mask is a circle, generated to match a 480x640 image:
let circleGenerator = CircleGenerator(size:Size(width:480, height:640))
edgeDetectionFilter.mask = circleGenerator
circleGenerator.renderCircleOfRadius(0.25, center:Position.center, circleColor:Color.white, backgroundColor:Color.transparent)
The area within the circle will have the filter applied, and the area outside will simply pass through the previous pixel colors.
This uses a stencil mask to perform this partial rendering, so it doesn't slow rendering by much. Unfortunately, I've pretty much ceased my work on the Objective-C version of GPUImage, so this won't be getting backported to that older version of the framework.

How to detect transparent area in image?

I am researching how to merge many images into one image on the iPhone, but I have a problem with this. I want to detect the transparent areas, which have a white background. I think it's possible to get a CGRect rectangle around each area, and afterwards I will drag my image into the transparent area, but I do not know how I can identify it. So if I detect all the transparent areas in this image, I will have an array of CGRects.
You can see my image:
Please help me, thank you very much!!
In terms of detecting transparent pixels, you can access the pixel buffer as described in Technical Q&A QA1509 and then iterate through the pixel buffer looking for pixels with an alpha channel value of less than 1.0.
But extrapolating from that to programmatically building an array of CGRects corresponding to contiguous transparent regions is non-trivial. If you make simplifying assumptions about the nature of the transparent regions (e.g. circular), it's quite a tractable little problem, though your thin rounded rectangle that intersects many of the circles complicates matters.
If your image with the transparent areas is predefined, though, I'd probably just define the rects manually rather than determining them programmatically.
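For the simple cases, here is a hedged sketch of the QA1509-style scan reduced to a single bounding CGRect around all not-fully-opaque pixels (splitting that into one rect per contiguous region would need a connected-components pass on top of this; the function name and RGBA layout are assumptions):
import UIKit

// Sketch: bounding box, in pixel coordinates, of every pixel whose alpha is
// below the threshold. Returns nil if no such pixel exists.
func boundingRectOfTransparentPixels(in image: UIImage,
                                     alphaThreshold: UInt8 = 255) -> CGRect? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width, height = cgImage.height

    guard let ctx = CGContext(data: nil, width: width, height: height,
                              bitsPerComponent: 8, bytesPerRow: 0,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    ctx.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    guard let buf = ctx.data?.assumingMemoryBound(to: UInt8.self) else { return nil }
    let rowBytes = ctx.bytesPerRow

    var minX = width, minY = height, maxX = -1, maxY = -1
    for y in 0..<height {
        for x in 0..<width {
            let alpha = buf[y * rowBytes + x * 4 + 3]
            if alpha < alphaThreshold {                 // "transparent enough"
                minX = min(minX, x); maxX = max(maxX, x)
                minY = min(minY, y); maxY = max(maxY, y)
            }
        }
    }
    guard maxX >= minX, maxY >= minY else { return nil }
    return CGRect(x: minX, y: minY, width: maxX - minX + 1, height: maxY - minY + 1)
}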

iOS Performance troubles with transparency

I just generated a gradient with transparency programmatically by adding a solid color and a gradient to an image mask, then applied the resulting image to my UIView's layer.contents. The visual is fine, but when I scroll objects under the transparency, the app gets choppy. Is there a way to speed this up?
My initial thought was caching the resulting gradient. Another thought was to create a gradient that is only one pixel wide and stretch it to cover the desired area. Will either of these approaches help the performance?
Joe
I recall reading (though I don't remember where) that Core Graphics gradients can have a noticeable effect on performance. If you can, using a PNG for your gradient instead should resolve the issue you are seeing.
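On the caching idea: rendering the gradient once into a one-point-wide UIImage and stretching it is also cheap to try. A sketch, with the colours, geometry and function name as placeholders:
import UIKit

// Sketch: render a 1-point-wide vertical gradient once, cache the UIImage, and
// let the layer stretch it instead of redrawing a Core Graphics gradient.
func makeGradientStrip(height: CGFloat) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: 1, height: height))
    return renderer.image { ctx in
        let colors = [UIColor.black.withAlphaComponent(0.8).cgColor,
                      UIColor.black.withAlphaComponent(0.0).cgColor]
        let gradient = CGGradient(colorsSpace: CGColorSpaceCreateDeviceRGB(),
                                  colors: colors as CFArray,
                                  locations: [0, 1] as [CGFloat])!
        ctx.cgContext.drawLinearGradient(gradient,
                                         start: .zero,
                                         end: CGPoint(x: 0, y: height),
                                         options: [])
    }
}

// Usage: cache the strip and let the layer stretch it across the view.
// someView.layer.contents = makeGradientStrip(height: 64).cgImage
// someView.layer.contentsGravity = .resize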

Distort image to make raindrop on screen effect

I want to make an image appear distorted as if raindrops are on the screen. (Image of a water droplet effect over a check pattern: http://db.tt/fQkx9bzh)
Any idea how I could do this using OpenGL or CoreImage?
I am able to get an image with the depth of the raindrop shapes if that helps. Otherwise, I'm really not sure how to do this, especially as the drops are not perfectly circular, and I have almost no experience with OpenGL or Core Image (although I can set up the buffers and do some simple drawing).
I'd use the elevation of the drop (that is, its distance from the surface) as a control texture for a Bulge effect. Use the barycenter of the drop as the center point for the effect.
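As one concrete starting point (not the control-texture approach described above, which would need a custom kernel or shader), Core Image's built-in CIBumpDistortion gives a single bulge that could be applied per drop; the centre, radius and scale values here are placeholders:
import CoreImage
import UIKit

// Sketch: one "water drop" bulge using the built-in CIBumpDistortion filter.
// For many drops you would apply it once per drop centre, or move to a custom
// kernel/shader driven by the drop-elevation image mentioned in the question.
func bulge(image: UIImage, center: CGPoint, radius: CGFloat) -> CIImage? {
    guard let input = CIImage(image: image) else { return nil }
    return CIFilter(name: "CIBumpDistortion", parameters: [
        kCIInputImageKey: input,
        kCIInputCenterKey: CIVector(x: center.x, y: center.y),
        kCIInputRadiusKey: radius,
        kCIInputScaleKey: 0.5            // positive values bulge outwards
    ])?.outputImage
}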
