OpenCV Floodfill - Replace pixel color with transparent pixel - ios

I'm trying to do flood fill on a UIImage and ended up using the OpenCV framework. I can replace a pixel's color with a solid color by defining it as cv::Scalar(255,0,0). However, I want the flood-fill selection to be transparent.
I don't know how to define a transparent color in OpenCV; to the best of my knowledge it's not possible, and the only option is to merge the image onto a transparent background. But it doesn't make much sense to flood fill using a solid color and then merge the result with a transparent layer, because the result will just be the original image with solid color in the fill areas.
Please correct me if I'm wrong.
Much appreciate your help in solving this.
Cheers

You cannot define a transparent color in OpenCV, since it currently does not support images with an alpha channel.
However, there is a workaround. You can first refer to this question to create a mask of your flood-fill area. Then you can easily calculate the alpha channel from that mask.
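A minimal Python/NumPy sketch of the mask-then-alpha idea (a simple BFS flood fill stands in for cv::floodFill with the FLOODFILL_MASK_ONLY flag; the toy image, seed point, and tolerance are made up for illustration):

```python
from collections import deque
import numpy as np

def flood_mask(img, seed, tol=10):
    """Return a uint8 mask (255 = filled) of the flood-fill region
    starting at `seed`, analogous to floodFill with FLOODFILL_MASK_ONLY."""
    h, w = img.shape[:2]
    mask = np.zeros((h, w), dtype=np.uint8)
    seed_color = img[seed].astype(int)
    q = deque([seed])
    mask[seed] = 255
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] == 0:
                if np.all(np.abs(img[ny, nx].astype(int) - seed_color) <= tol):
                    mask[ny, nx] = 255
                    q.append((ny, nx))
    return mask

def make_transparent(img, mask):
    """Merge a 3-channel image with an alpha channel computed from
    the mask: flood-filled pixels become fully transparent."""
    alpha = np.where(mask == 255, 0, 255).astype(np.uint8)
    return np.dstack([img, alpha])

# Toy 4x4 image: left half red, right half blue (BGR order).
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:, :2] = (0, 0, 255)
img[:, 2:] = (255, 0, 0)
mask = flood_mask(img, (0, 0))
rgba = make_transparent(img, mask)
```

Flood filling from (0, 0) selects the red half, and the alpha channel then zeroes it out, so the "fill color" is effectively transparency.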

Related

Check if a color filter is applied on an image using OpenCV

I have the following images.
Color filter vs No filter
The top image is the normal image, while the bottom image has a red filter applied to it. Is there any way to detect whether a color filter was applied to an image using OpenCV?
I'm quite new to OpenCV, so I don't really know a lot about manipulating color with it.
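One simple heuristic is to compare the per-channel means: a strong color cast pushes one channel well above the others. Here is a hedged Python/NumPy sketch (the ratio threshold is made up and would need tuning; a more robust check would compare per-channel histograms of the filtered image against the original):

```python
import numpy as np

def has_color_cast(img, ratio=1.5):
    """Return the index of a dominant color channel (0=R, 1=G, 2=B)
    if its mean exceeds the other channels' means by `ratio`,
    else None. `img` is an H x W x 3 uint8 RGB array."""
    means = img.reshape(-1, 3).mean(axis=0)
    dominant = int(means.argmax())
    others = np.delete(means, dominant)
    if others.max() > 0 and means[dominant] / others.max() >= ratio:
        return dominant
    return None

# Neutral grey image: no cast expected.
grey = np.full((8, 8, 3), 128, dtype=np.uint8)
# The same image pushed through a crude "red filter".
red = grey.copy()
red[..., 1:] //= 2
```

This only detects casts toward a single channel; filters that preserve the overall channel balance would need a different approach.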

How bright is part of UIImage?

I have a label over part of a UIImageView. If the image is too bright, white text can't be read. Is there an easy way to detect how bright a portion of the image is?
There is no "easy" way of doing it, at least not for the processor. The easiest route for the developer is to get access to the raw RGBA buffer of the image data and compute the average color. Then convert that color to HSV and check its value (brightness) component. You can even use the GPU to make things a bit quicker; OpenGL should be perfect for that.
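The averaging step can be sketched in Python/NumPy as follows (using the Rec. 601 luma weights as a stand-in for the HSV brightness check; on iOS you would first read the raw RGBA buffer, e.g. via a CGBitmapContext):

```python
import numpy as np

def region_brightness(rgb, rect):
    """Average perceived brightness (0.0..1.0) of `rect` = (x, y, w, h)
    in an H x W x 3 uint8 RGB buffer, using Rec. 601 luma weights."""
    x, y, w, h = rect
    region = rgb[y:y + h, x:x + w].astype(float) / 255.0
    luma = (0.299 * region[..., 0]
            + 0.587 * region[..., 1]
            + 0.114 * region[..., 2])
    return float(luma.mean())

# Toy buffer: left half black, right half white.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[:, 5:] = 255
dark = region_brightness(img, (0, 0, 5, 10))
bright = region_brightness(img, (5, 0, 5, 10))
```

You would pass the rect under your label and compare the result against a threshold to decide between white and black text.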
But before you get too far: the result will most likely not be what you are looking for. There are always cases that will make your text unreadable no matter what color it is. Suppose you have white text and switch it to black once the image is too bright, but the image consists purely of black and white stripes, so that every odd letter sits over a white stripe and the rest over black. The text is simply unreadable.
I suggest you try a stroke, a drop shadow, or a background instead. For instance, you can have white text on a label with a semitransparent black background color and some layer corner radius. The background will barely be visible on all but the brightest images, and the text will always be readable.

How to detect transparent area in image?

I'm researching how to merge many images into a single image on iPhone, but I have a problem. I want to detect transparent areas, which here appear against a white background. I think it's possible to get a CGRect rectangle around each such area, and then I will drag my image into a transparent area, but I do not know how to identify them. If I can detect all the transparent areas in this image, I will end up with an array of CGRects.
You can see my image:
Please help me, thank you very much!!
In terms of detecting transparent pixels, you can access the pixel buffer as described in Technical Q&A QA1509 and then iterate through the pixel buffer looking for pixels with an alpha channel value of less than 1.0.
But extrapolating from that to programmatically building an array of CGRect values corresponding to contiguous transparent pixels is non-trivial. If you make simplifying assumptions about the nature of the transparent regions (e.g. that they are circular), it's quite a tractable little problem, though your thin rounded rectangle that intersects many of the circles complicates things.
If your image with transparent areas is predefined, though, I'd probably just define them manually rather than determining them programmatically.
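For the general case, the non-trivial part is connected-component labelling over the alpha channel. A Python/NumPy sketch of one way to do it (BFS over 4-connected transparent pixels; a CGRect becomes an (x, y, w, h) tuple here, and the toy alpha buffer is made up):

```python
from collections import deque
import numpy as np

def transparent_rects(alpha, threshold=255):
    """Bounding boxes (x, y, w, h) of 4-connected regions of pixels
    whose alpha is below `threshold` (i.e. not fully opaque)."""
    h, w = alpha.shape
    seen = np.zeros((h, w), dtype=bool)
    rects = []
    for sy in range(h):
        for sx in range(w):
            if alpha[sy, sx] < threshold and not seen[sy, sx]:
                q = deque([(sy, sx)])
                seen[sy, sx] = True
                x0, y0, x1, y1 = sx, sy, sx, sy
                while q:
                    y, x = q.popleft()
                    x0, y0 = min(x0, x), min(y0, y)
                    x1, y1 = max(x1, x), max(y1, y)
                    for ny, nx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny, nx]
                                and alpha[ny, nx] < threshold):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                rects.append((x0, y0, x1 - x0 + 1, y1 - y0 + 1))
    return rects

# Opaque 8x8 canvas with two fully transparent squares.
a = np.full((8, 8), 255, dtype=np.uint8)
a[1:3, 1:3] = 0
a[5:7, 4:8] = 0
rects = transparent_rects(a)
```

Note this returns axis-aligned bounding boxes; for the overlapping rounded-rectangle case mentioned above, the boxes of distinct shapes would merge into one.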

Image shape detection in JavaScript

I'm looking to write a script to look over a series of images that are essentially white canvas with a few black rectangles on them.
My question is this: what's the best modus operandi that would identify each of the black rectangles in turn.
Obviously I'd scan the image pixel by pixel and work out whether its colour was black or white. So far so good; identifying and isolating each rectangle is the tricky part. A pointer in the right direction would be a great help, thank you.
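Since the shapes are axis-aligned rectangles on a clean white canvas, one pointer: when your scan hits a black pixel you haven't accounted for, it must be a rectangle's top-left corner, so you can walk right and down to measure it, then mark it as handled. A Python/NumPy sketch of that idea (it assumes non-overlapping, perfectly black rectangles; real images would first need thresholding to pure black/white):

```python
import numpy as np

def find_black_rects(img, black=0):
    """Locate axis-aligned black rectangles on a white canvas.
    `img` is an H x W uint8 greyscale array. Scans for an unvisited
    black pixel (a rectangle's top-left corner), walks right and
    down to measure it, then erases it so it is counted once."""
    grid = img.copy()
    rects = []
    h, w = grid.shape
    for y in range(h):
        for x in range(w):
            if grid[y, x] == black:
                rw = 1
                while x + rw < w and grid[y, x + rw] == black:
                    rw += 1
                rh = 1
                while y + rh < h and grid[y + rh, x] == black:
                    rh += 1
                rects.append((x, y, rw, rh))
                grid[y:y + rh, x:x + rw] = 255  # erase this rectangle
    return rects

# White 10x12 canvas with two black rectangles.
canvas = np.full((10, 12), 255, dtype=np.uint8)
canvas[1:4, 2:6] = 0    # at (2, 1), 4 wide, 3 tall
canvas[6:8, 8:11] = 0   # at (8, 6), 3 wide, 2 tall
found = find_black_rects(canvas)
```

The same structure translates directly to JavaScript over a canvas ImageData buffer; for rotated or overlapping shapes you'd need full connected-component labelling instead.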

iOS Performance troubles with transparency

I just generated a gradient with transparency programmatically by adding a solid color and a gradient to an image mask. I then applied the resulting image to my UIView.layer.contents. The visual is fine, but when I scroll content under the transparency, the app gets choppy. Is there a way to speed it up?
My initial thought was to cache the resulting gradient. Another thought was to create a gradient that is only one pixel wide and stretch it to cover the desired area. Will either of these approaches help performance?
Joe
I recall reading (though I don't remember where) that Core Graphics gradients can have a noticeable effect on performance. If you can, using a PNG for your gradient instead should resolve the issue you are seeing.