How to apply a chroma key filter with any color to a live camera feed on iOS?

Basically, I want to apply a chroma key filter to the iOS live camera feed, but I want the user to pick the color that will be replaced by another color.
I found some examples using a green screen, but I don't know how to key off a dynamically chosen color instead of just green.
Any idea how I can achieve that with the best performance?

You've previously asked about my GPUImage framework, so I assume that you're familiar with it. Within that framework are two filters, a GPUImageChromaKeyFilter and a GPUImageChromaKeyBlendFilter. Both will key off of whatever color you specify via the -setColorToReplaceRed:green:blue: method, with a threshold set using the thresholdSensitivity property.
The former merely turns areas matching the color (within the threshold) to an alpha of 0; the latter actually blends another image or video source into the matching areas of the input. The FilterShowcase example application shows how to do this for green, but you can set the keying color to anything you want.
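To make that concrete, here is a minimal sketch in Swift, assuming the Objective-C GPUImage framework is linked (method names are as its Obj-C API imports into Swift, and "backdrop" is a placeholder asset name):

import UIKit
import AVFoundation
import GPUImage

// Key out a user-picked color from the live feed and blend a replacement
// image through the keyed areas.
let camera = GPUImageVideoCamera(sessionPreset: AVCaptureSession.Preset.hd1280x720.rawValue,
                                 cameraPosition: .back)
let filter = GPUImageChromaKeyBlendFilter()
filter.setColorToReplaceRed(0.2, green: 0.5, blue: 0.9)  // whatever color the user picked
filter.thresholdSensitivity = 0.4                        // how close a pixel must be to key out

let replacement = GPUImagePicture(image: UIImage(named: "backdrop"))
camera.addTarget(filter)          // first input: the live feed
replacement.addTarget(filter)     // second input: shows through keyed areas

let preview = GPUImageView(frame: UIScreen.main.bounds)
filter.addTarget(preview)

replacement.processImage()
camera.startCameraCapture()

Wiring a color picker to setColorToReplaceRed:green:blue: is all it takes to re-key on a new color at runtime.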

Related

CIFilters for compositing Red, Green, and Blue color channels

I am building a "Curves" editor for an image and would like to split out each color channel, run each through the CIToneCurve filter, and then composite them back together into a single color image. (I'm aware of the CIColorCurves filter, but it doesn't give me the control I want.)
I am able to separate the channels using three separate CIColorCube filters to generate the three color channels, but I'm not sure how to put them back together to form a single color image.
Using the CIMaximumCompositing and CIMinimumCompositing filters works, but when I run the individual color images through the tone curve, adjusting the highs or the lows (depending on which compositing filter I used) messes up the colors.
You could do this with Accelerate.vImage.
Apple has an article that discusses converting an interleaved image to separate planar buffers: https://developer.apple.com/documentation/accelerate/optimizing_image_processing_performance
...and there's an article that discusses vImage / Core Image interoperability using CIImageProcessorKernel: https://developer.apple.com/documentation/accelerate/reading_from_and_writing_to_core_video_pixel_buffers. I can't remember if CIImageProcessorKernel supports single channel 8-bit images such as R8.
...also, this Apple sample code project may be of interest: Applying Tone Curve Adjustments to Images.
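For the planar-split step that article describes, a hedged sketch in Swift (names and the error type are illustrative):

import Accelerate

struct VImageError: Error { let code: vImage_Error }

// Split an interleaved 8-bit ARGB buffer into four planar buffers so each
// channel can go through its own tone curve.
func splitIntoPlanes(_ source: inout vImage_Buffer) throws
    -> (a: vImage_Buffer, r: vImage_Buffer, g: vImage_Buffer, b: vImage_Buffer) {
    func makePlane() throws -> vImage_Buffer {
        try vImage_Buffer(width: Int(source.width), height: Int(source.height), bitsPerPixel: 8)
    }
    var a = try makePlane(), r = try makePlane(), g = try makePlane(), b = try makePlane()
    let err = vImageConvert_ARGB8888toPlanar8(&source, &a, &r, &g, &b,
                                              vImage_Flags(kvImageNoFlags))
    guard err == kvImageNoError else { throw VImageError(code: err) }
    return (a, r, g, b)
}

vImageConvert_Planar8toARGB8888 recombines the planes afterwards, and each buffer allocated this way should be released with its free() method when you're done.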
I ended up using the suggestion posted by Frank Schlegel, with simple additive compositing. I had to write my own CIFilter to do that, but it was quite simple.
#include <CoreImage/CoreImage.h>
extern "C" { namespace coreimage {
half4 rgbaComposite(sample_h redColor, sample_h greenColor, sample_h blueColor, sample_h alphaColor) {
    return half4(redColor.r, greenColor.g, blueColor.b, alphaColor.a); // take one channel from each input
}
}} // namespace coreimage, extern "C"
This is for a Metal-backed CIFilter. Each input is assumed to contain only a single color channel.
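For reference, a hedged sketch of loading that kernel from the app's compiled Metal library (the library name and the four per-channel CIImages are assumptions from the pipeline above):

import Foundation
import CoreImage

// Load the rgbaComposite kernel and apply it to four single-channel CIImages.
func compositeChannels(redImage: CIImage, greenImage: CIImage,
                       blueImage: CIImage, alphaImage: CIImage) throws -> CIImage? {
    let url = Bundle.main.url(forResource: "default", withExtension: "metallib")!
    let data = try Data(contentsOf: url)
    let kernel = try CIColorKernel(functionName: "rgbaComposite", fromMetalLibraryData: data)
    return kernel.apply(extent: redImage.extent,
                        arguments: [redImage, greenImage, blueImage, alphaImage])
}

Remember that Metal CI kernels need the -fcikernel compiler flag and the -cikernel linker flag set on the target, or the kernel won't be found in the library.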

Highcharts: backgroundColour "layers"?

At present I'm generating a chart based on a bunch of user-selected options, and this is rendered on the server to generate a png output file. The generated png chart is then displayed on the user's system, over an underlying system background.
Where the plotBackgroundColor of the chart has some opacity, the user's underlying system background will of course show through, and will influence how the chart appears.
That's all fine, because the user has complete control over both the highcharts plotBackgroundColor and the system background colour.
But now I want to generate a chart that is "free-standing", with a solid background colour (no opacity) that represents exactly how the chart appears over the system background. That way I can display the chart on any system and give a true picture of what the user sees, regardless of the system background colour on the target device.
I do have access to the user's system background colour (it's either a bitmap, or I can just extract the "dominant" colour somehow and use that as a solid colour instead if it's easier).
So using the concept of layers, this would be like merging the highcharts plotBackgroundColor with a solid colour that represents the system background colour, and using that as plotBackgroundColor instead.
Or maybe there's a way to change an underlying background "browser" colour that is used in the highcharts renderer, independent of plotBackgroundColor?
I'm sure this must be possible somehow?
One way of doing this, and it's what I'm doing until someone posts a better answer, is just to manually combine each backgroundColor value with the underlying system canvas colour, using the method described at https://stackoverflow.com/a/10782314/4070848. This is basically just laying one colour over the other, and using an algorithm to determine the combined colour based on the respective opacities and rgb values.
I check whether the backgroundColor is a plain colour or a linear/radial gradient, and if the latter then I combine each of the stop colours separately with the underlying colour and reconstruct the gradient based on the merged stop colours.
Seems to work OK, but maybe there's an off-the-shelf method, or maybe someone can tell me how to do it better...!
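For completeness, here is that "source over" compositing step as a small sketch (the math is language-agnostic; the types and names here are illustrative, shown in Swift):

struct RGBA { var r, g, b, a: Double }   // all components in 0...1

// Standard "over" operator: composite a (possibly translucent) top colour
// over a bottom colour, producing the flattened result.
func composite(_ top: RGBA, over bottom: RGBA) -> RGBA {
    let outA = top.a + bottom.a * (1 - top.a)
    guard outA > 0 else { return RGBA(r: 0, g: 0, b: 0, a: 0) }
    func channel(_ t: Double, _ b: Double) -> Double {
        (t * top.a + b * bottom.a * (1 - top.a)) / outA
    }
    return RGBA(r: channel(top.r, bottom.r),
                g: channel(top.g, bottom.g),
                b: channel(top.b, bottom.b),
                a: outA)
}

// e.g. a 40%-opaque white plot background over an opaque blue system background:
// composite(RGBA(r: 1, g: 1, b: 1, a: 0.4), over: RGBA(r: 0, g: 0, b: 1, a: 1))

For a gradient background, apply this per stop colour, exactly as described above.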

Coloring shapes in an iOS app

We are building an iPad application for kids in which a kid is asked to color different shapes with a specific color. For example, consider an image with sky and trees, etc., all overlapping. The kid selects a color, for example "Blue", and then taps the sky; the sky should turn blue, otherwise the app should say "wrong color".
My questions are:
1- How do we implement coloring only the sky with the selected color? We have implemented a Cocos2d flood fill, but it is too slow.
2- How do we tie each part of the image to its specific correct color? We considered loading a fully colored image into a background layer and testing it at the tap location... BUT how do we implement that?
Thanks
Are the shapes originally vector-based? If so, a solution would be to work directly with them as vectors, parsing them into Core Animation shapes.
You can give SVGKit a try, or get some inspiration from it. You'll get CAShapeLayers whose fillColor property you can change.
I believe this approach would be much more responsive (and the app much lighter) than doing tricks with images ;-)
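A hedged sketch of that approach, assuming each colorable region ends up as a CAShapeLayer paired with its expected color (all names here are illustrative):

import UIKit

final class ColoringView: UIView {
    var selectedColor: UIColor = .blue
    private var regions: [(shape: CAShapeLayer, expected: UIColor)] = []

    func addRegion(path: UIBezierPath, expectedColor: UIColor) {
        let shape = CAShapeLayer()
        shape.path = path.cgPath
        shape.fillColor = UIColor.white.cgColor   // uncolored to start
        layer.addSublayer(shape)
        regions.append((shape, expectedColor))
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        // Walk regions topmost-first so overlapping shapes resolve correctly.
        for (shape, expected) in regions.reversed()
            where shape.path?.contains(point) == true {
            if expected == selectedColor {
                shape.fillColor = selectedColor.cgColor
            } else {
                print("wrong color")   // hook up real feedback here
            }
            return
        }
    }
}

Hit-testing the tap against the vector paths also answers question 2 directly: the expected color travels with each shape, so no hidden reference image is needed.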

How can I change the hue of a UIImage programmatically in only a few parts?

How can I change the hue of a UIImage programmatically in only a few parts? I have followed this link:
How to programmatically change the hue of UIImage?
and used the same code in my application. It works fine, but the hue of the entire image changes. I only want to change the color of the tree in the snapshot, not the whole image. How can I do that?
This is a specific case of the more general problem of masking. I assume you have some way of knowing which pixels are in the "tree" part and which are not. (If not, that's a whole other question/problem.)
If so, first draw the original into the result context, then create a mask (see here: http://mobiledevelopertips.com/cocoa/how-to-mask-an-image.html) and draw the changed-hue version with the tree mask active.
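If Core Image is an option, its CIBlendWithMask filter does that composite in one step instead of the Core Graphics masking the link describes. A hedged sketch, where treeMask is a hypothetical grayscale CIImage marking the tree pixels:

import CoreImage

// White areas of the mask show the hue-shifted image; black areas keep
// the original.
func blendHueChange(original: CIImage, hueShifted: CIImage, treeMask: CIImage) -> CIImage? {
    CIFilter(name: "CIBlendWithMask", parameters: [
        kCIInputImageKey: hueShifted,
        kCIInputBackgroundImageKey: original,
        kCIInputMaskImageKey: treeMask
    ])?.outputImage
}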
I recommend you take a look at the Core Image API, and the CIColorCube or CIColorMap filter in particular. How to define the color cube or color map is where the real magic lies. You'll need to transform tree tones (browns, etc.), though this will obviously transform all browns, not just your tree.
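For illustration, a hedged sketch of building such a cube that rotates a band of brown-ish hues toward a replacement hue; the hue window (0.05...0.12) and the 0.33 target are guesses to tune, not canonical values:

import UIKit
import CoreImage

// Build a 64-cubed color cube: hues in the brown-ish band are rotated to
// targetHue, everything else passes through unchanged.
func makeBrownShiftCubeData(dimension: Int = 64, targetHue: CGFloat = 0.33) -> Data {
    var cube = [Float]()
    cube.reserveCapacity(dimension * dimension * dimension * 4)
    for b in 0..<dimension {
        for g in 0..<dimension {
            for r in 0..<dimension {        // red varies fastest, as CIColorCube expects
                var color = UIColor(red: CGFloat(r) / CGFloat(dimension - 1),
                                    green: CGFloat(g) / CGFloat(dimension - 1),
                                    blue: CGFloat(b) / CGFloat(dimension - 1), alpha: 1)
                var h: CGFloat = 0, s: CGFloat = 0, v: CGFloat = 0
                color.getHue(&h, saturation: &s, brightness: &v, alpha: nil)
                if (0.05...0.12).contains(h) {
                    color = UIColor(hue: targetHue, saturation: s, brightness: v, alpha: 1)
                }
                var nr: CGFloat = 0, ng: CGFloat = 0, nb: CGFloat = 0
                color.getRed(&nr, green: &ng, blue: &nb, alpha: nil)
                cube += [Float(nr), Float(ng), Float(nb), 1]
            }
        }
    }
    return cube.withUnsafeBufferPointer { Data(buffer: $0) }
}

// Usage:
// let filter = CIFilter(name: "CIColorCube", parameters: [
//     kCIInputImageKey: inputImage,
//     "inputCubeDimension": 64,
//     "inputCubeData": makeBrownShiftCubeData()])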

Best approach for customizing an image with text on iOS

I have a project to customize clothes, let's say a t-shirt, with the following features:
change colors.
add a few lines of text (<= 4) and change the font from a list.
add an image or photo to the t-shirt.
rotate the t-shirt to customize the back side.
rotate the image and zoom in/out.
save the result as a project locally and send it to a web service (I'm thinking of using NSDictionary/JSON).
save as an image.
So my question is:
Should I use multiple images to simulate color changes, or should I use QuartzCore (I'm not an expert in QuartzCore, but I'll learn it if I have to)? Or is there a better approach?
Thank you.
The simple way to do this is to render the t-shirt image into a CGContext, then walk the rows and columns and change pixels showing a "strong" primary color to the desired tint. You would take a photo of a person wearing a bright red (or other primary color) t-shirt, then in your code change only the pixels where the red has high luminance and saturation (i.e., the "r" value is over some threshold and the b and g components are low).
The modified image is then going to look a bit flat: when you change the pixels to one value (the new tint), there is no variation in luminance. To make it more realistic, you want each pixel to keep the same luminance it had before. You can do this by converting back and forth between RGB and a color space like HCL. Apple has a great doc on color (in the Mac section) that explains color spaces (google 'site:developer.apple.com "Color Spaces"').
To reach your goal, you will have to tackle these technologies (a rough sketch follows the list):
create a CGContext and render an image into it using Quartz
figure out how to read each pixel (pixels can have alpha and different orderings)
figure out a good way to identify the proper pixels (test by making these black or white)
for each pixel you want to change, convert its RGB to HCL to get its luminance
replace the pixel with one of a different color and hue but the same luminance
use the CGContext to make a new image
If all this seems too difficult, then you'll have to ship a separate image for every color you want.
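Here is a rough sketch of those steps. The "strong red" thresholds are illustrative, and as a crude stand-in for the HCL round trip, the new tint is scaled by the original red channel so folds and shadows survive:

import UIKit

func retint(_ image: UIImage, toRed tr: CGFloat, green tg: CGFloat, blue tb: CGFloat) -> UIImage? {
    guard let cg = image.cgImage else { return nil }
    let w = cg.width, h = cg.height
    // Draw into a context with a known RGBA8 layout so the pixel walk is simple.
    guard let ctx = CGContext(data: nil, width: w, height: h,
                              bitsPerComponent: 8, bytesPerRow: w * 4,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    ctx.draw(cg, in: CGRect(x: 0, y: 0, width: w, height: h))
    guard let buf = ctx.data?.bindMemory(to: UInt8.self, capacity: w * h * 4) else { return nil }

    for i in stride(from: 0, to: w * h * 4, by: 4) {      // 4 bytes per pixel: R G B A
        let r = CGFloat(buf[i]) / 255
        let g = CGFloat(buf[i + 1]) / 255
        let b = CGFloat(buf[i + 2]) / 255
        if r > 0.5 && g < 0.35 && b < 0.35 {              // "strong red" test: tune per photo
            let shade = r                                  // red channel carries the shading
            buf[i]     = UInt8(min(255, tr * shade * 255))
            buf[i + 1] = UInt8(min(255, tg * shade * 255))
            buf[i + 2] = UInt8(min(255, tb * shade * 255))
        }
    }
    return ctx.makeImage().map { UIImage(cgImage: $0) }
}

Making the matched pixels black or white first, as suggested above, is a quick way to verify the threshold before wiring in the tint.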
