How does Photoshop Camera RAW/Lightroom's Color Calibration Tool Work? - image-processing

I want to reverse engineer the camera calibration panel in the Camera RAW filter in Photoshop/Lightroom.
Photoshop Colour Calibration Tool
It can create some pretty cool effects, so I want to write a program that will help automate them. I've tried to figure out how it works, and it behaves differently from the HSL colour adjustment methods: moving just the "Blue Primary" slider affects all colours, not only the blue hues (it even affects some colours that begin as solid red).
I've tried to graph the sort of function this performs. It seems to shift the hue of pure blue in RGB to wherever you move the slider, but I'm not sure what that actually implies for the rest of the colours.
Here's an unmodified graph of hues relating to RGB values.
Here's the same graph, but by shifting the blue primary hues all the way to the left.
I know it's doing more than just hue shifting: running the filter on a hue spectrum with both L and S at 100% visibly changes the lightness and saturation of some hues; see the images linked below for an example.
Regular Hue Spectrum
Hue Spectrum with Blue Primary slider all the way to the left.
Is there any open source software that does something like this that I can look to for code, or possibly for an idea of how this actually works under the hood?

I figured it out (at least, what I believe they're doing), so if anyone else has the same question: they're taking advantage of the chromaticity coordinates of the RGB primaries in the RGB -> XYZ colour space conversion. When you shift the hue of the blue primary, I think they first shift the hue of pure blue in HSL, convert that shifted colour to XYZ, and project the XYZ onto xy to get the chromaticity coordinate of the shifted blue. Then, to apply that to an image, they convert from RGB to XYZ using the matrix built from the shifted coordinate, and convert back into RGB with the unshifted XYZ conversion matrix.
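To make that concrete, here is a minimal numpy sketch of my reconstruction (my own assumptions throughout, not Adobe's actual code): hue-shift pure blue in HSL, project it to an xy chromaticity, rebuild an RGB -> XYZ matrix from the shifted primary, and decode back through the standard matrix.

```python
import colorsys
import numpy as np

# Standard (approximate) sRGB -> XYZ matrix for D65; rows are X, Y, Z.
M_SRGB = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def xy_chromaticity(rgb):
    """Project a linear-RGB colour onto the xy chromaticity plane."""
    X, Y, Z = M_SRGB @ rgb
    return X / (X + Y + Z), Y / (X + Y + Z)

def matrix_from_primaries(xy_r, xy_g, xy_b, white_XYZ):
    """Rebuild an RGB -> XYZ matrix from primary chromaticities and a
    white point (the usual primaries-to-matrix construction)."""
    cols = [np.array([x / y, 1.0, (1.0 - x - y) / y]) for x, y in (xy_r, xy_g, xy_b)]
    M = np.column_stack(cols)
    S = np.linalg.solve(M, white_XYZ)  # scale columns so white stays white
    return M * S

def shifted_blue_xy(hue_shift):
    """Hue-shift pure blue in HSL, then take its xy chromaticity."""
    h, l, s = colorsys.rgb_to_hls(0.0, 0.0, 1.0)
    return xy_chromaticity(np.array(colorsys.hls_to_rgb((h + hue_shift) % 1.0, l, s)))

white = M_SRGB @ np.ones(3)  # XYZ of RGB white (D65)
xy_r = xy_chromaticity(np.array([1.0, 0.0, 0.0]))
xy_g = xy_chromaticity(np.array([0.0, 1.0, 0.0]))

# Encode with the shifted blue primary, decode with the standard matrix.
M_shifted = matrix_from_primaries(xy_r, xy_g, shifted_blue_xy(-0.1), white)
M_decode = np.linalg.inv(M_SRGB)

def calibrate(rgb):
    """Apply the 'blue primary' shift to one linear-RGB pixel."""
    return np.clip(M_decode @ M_shifted @ rgb, 0.0, 1.0)

print(calibrate(np.array([1.0, 0.0, 0.0])))  # even pure red moves a little
```

Because the encode and decode matrices no longer match, every colour is affected to some degree, which would explain why the slider touches reds as well as blues.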

Related

GPUImage Histogram Equalization

I would like to use GPUImage's Histogram Equalization filter (link to .h) (link to .m) for a camera app. I'd like to use it in real time and present it as an option to be applied on the live camera feed. I understand this may be an expensive operation and cause some latency.
I'm confused about how this filter works. When selected in GPUImage's example project (Filter Showcase), the filter shows a very dark image biased toward red and blue, which does not seem to be how equalization should work.
Also, what is the difference between the histogram types kGPUImageHistogramLuminance and kGPUImageHistogramRGB? Filter Showcase uses kGPUImageHistogramLuminance, but the default in the init is kGPUImageHistogramRGB. If I switch Filter Showcase to kGPUImageHistogramRGB, I just get a black screen. My goal is overall contrast optimization.
Does anyone have experience using this filter? Or are there current limitations with this filter that are documented somewhere?
Histogram equalization of RGB images is done on the luminance channel, because equalizing the RGB channels separately would distort the colour information.
You basically convert RGB to a colour space that separates colour from intensity, equalize the intensity image, and finally convert back to RGB.
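For reference, here is what that pipeline looks like on the CPU with OpenCV; a hedged sketch of the same idea, not GPUImage's GPU implementation (file names are placeholders):

```python
import cv2

img = cv2.imread("input.jpg")                      # BGR image
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)     # split luma from chroma
ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])  # equalize luma only
cv2.imwrite("equalized.jpg", cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR))
```

Because only the Y channel is touched, hues stay put and only brightness/contrast change.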
According to the documentation: http://oss.io/p/BradLarson/GPUImage
GPUImageHistogramFilter: This analyzes the incoming image and creates an output histogram with the frequency at which each color value occurs. The output of this filter is a 3-pixel-high, 256-pixel-wide image with the center (vertical) pixels containing pixels that correspond to the frequency at which various color values occurred. Each color value occupies one of the 256 width positions, from 0 on the left to 255 on the right. This histogram can be generated for individual color channels (kGPUImageHistogramRed, kGPUImageHistogramGreen, kGPUImageHistogramBlue), the luminance of the image (kGPUImageHistogramLuminance), or for all three color channels at once (kGPUImageHistogramRGB).
I'm not very familiar with the programming language used, so I can't tell whether the implementation is correct. But in the end colours should not change much; pixels should just become brighter or darker.

Perlin noise, how to detect bright/dark areas?

I need some help with Perlin noise.
I want to create a random terrain generator, and I am using Perlin noise to determine where the mountains and sea should go. From the random noise, I get something like this:
http://prntscr.com/df0rqp
Now, how can I actually detect where the brighter and darker areas are?
I tried using display.colorSample, which returns the RGB colour and alpha of a pixel, but this doesn't really help me much.
If the image were only white and red, I could easily detect the bright areas (white would give a large value, red a small one) and vice versa.
However, since I have red, green and blue, this makes it a hard job.
To sum up: how can I detect where the white and the red areas are?
You have a fundamental misunderstanding here. The Perlin noise function really only maps (x, y) -> p (it also works in higher dimensions): two real inputs, one real output. What you are seeing is just your library being nice by mapping the single result value p to a colour gradient, but that is only for visualization. p is not a colour, just another number, so use it directly! If p < 0 you might place water.
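For instance, a rough Python sketch using the third-party noise package (the question uses Corona/Lua and display.colorSample, so the grid size, scale and thresholds here are my own assumptions):

```python
from noise import pnoise2  # pip install noise

WIDTH, HEIGHT, SCALE = 128, 128, 0.05
SEA_LEVEL = 0.0  # assumed threshold; tune to taste

terrain = []
for y in range(HEIGHT):
    row = []
    for x in range(WIDTH):
        p = pnoise2(x * SCALE, y * SCALE)  # p is roughly in [-1, 1]
        row.append("sea" if p < SEA_LEVEL else "mountain" if p > 0.4 else "land")
    terrain.append(row)
```

No colour sampling needed: the scalar p already tells you how "bright" that spot would have been rendered.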
I would suggest this:
1. Shift the hue of the image into red, like here.
2. Use the red channel to derive a mask (see the sketch below).
3. Optional: scale the min/max brightness into the 0-255 range.
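If you really are stuck with the rendered image, here is a rough OpenCV equivalent of the steps above (I skip the hue shift and simply collapse to one brightness channel; the file name and threshold are assumptions):

```python
import cv2

img = cv2.imread("noise.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                  # one channel
gray = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)     # stretch to 0-255
_, bright = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)  # bright-area mask
```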

subtract one color from another in RGB color space

I would like to subtract one color from another. For example, I have two 100×100-pixel images, one with color R:236 G:226 B:43 and the other with R:63 G:85 B:235. I would like to subtract R:236 G:226 B:43 from R:63 G:85 B:235. But I know it can't be done mathematically, channel by channel (R: 236-63, G: 226-85, B: 43-235), because results below 0 or above 255 are undefined.
I also found the RYB color space, but I don't know how it really works.
Thank you for your help.
You cannot actually subtract colors. But you surely can detect their difference. I suppose this is what you need, anyway.
Here are some thoughts and remarks:
- Convert your images to the HSV colorspace, which transforms RGB values into Hue, Saturation and Brightness (Value).
- All your images should be around a yellowish color (near 60° on the Hue circle), so they should all have about the same Hue, with minor differences.
- Typically, if all images are taken under constant lighting conditions, they should have the same Value (brightness).
- Saturation corresponds to the purity of a color (how little white is mixed into it) and typically represents how intense you perceive the color to be. It should also be about the same for all your images under constant lighting conditions.
According to your first description, the main difference should be detected in the Hue channel.
A good thing about HSV is that H (hue) is represented as an angle on a (counterclockwise) circle and colors are just positions on this circle, so positive and negative differences both make sense (search Google for a description of the HSV colorspace to see how it looks and works).
You may either detect differences by a subtraction, which gives you a positive or negative value, or by taking the absolute value of the subtraction, which gives just the magnitude of the difference between the two Hues (without any information on its direction). If you need the direction of the difference, stick to a plain subtraction.
For example:
Hue_1 - Hue_2 = Hue_3 (typically a small value for your problem)
if Hue_3 > 0, Hue_1 is a bit towards Green
if Hue_3 < 0, Hue_1 is a bit towards Red
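A small Python sketch of that signed, wrap-around hue comparison, using the two colours from the question (the wrapping convention into [-180, 180) is my choice):

```python
import colorsys

def hue_difference(rgb_1, rgb_2):
    """Signed hue difference in degrees, wrapped into [-180, 180)."""
    h1 = colorsys.rgb_to_hsv(*rgb_1)[0] * 360.0
    h2 = colorsys.rgb_to_hsv(*rgb_2)[0] * 360.0
    return (h1 - h2 + 180.0) % 360.0 - 180.0

c1 = (236 / 255, 226 / 255, 43 / 255)  # yellowish
c2 = (63 / 255, 85 / 255, 235 / 255)   # blueish
print(hue_difference(c1, c2))  # the sign gives the direction of the shift
```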
Of course you may also need to take a look at the differences in the other channels, S and V to see if colors are more saturated or more bright, but I cannot be sure you need to do this since we haven't seen any images here.
Of course you can do a lot more sophisticated things, like applying clustering or classification techniques to the detected hues and grouping them into classes according to your problem's needs...

Algorithm for determining the prominent colour of a photograph

When we look at a photo of a group of trees, we are able to identify that the photo is predominantly green and brown, or for a picture of the sea we are able to identify that it is mostly blue.
Does anyone know of an algorithm that can be used to detect the prominent color or colours in a photo?
I can envisage a 3D clustering algorithm in RGB space or something similar. I was wondering if someone knows of an existing technique.
Convert the image from RGB to a color space with brightness and saturation separated (HSL/HSV)
http://en.wikipedia.org/wiki/HSL_and_HSV
Then find the dominant hue: make a histogram of each pixel's hue value and analyze which angular region the peaks fall into. A large peak in the quadrant between 180 and 270 degrees means there is a large portion of blue in the image, for example.
There can be several difficulties in determining one dominant color. Pathological example: an image whose left half is blue and right half is red. Also, the hue will not deal very well with grayscales obviously. So a chessboard image with 50% white and 50% black will suffer from two problems: the hue is arbitrary for a black/white image, and there are two colors that are exactly 50% of the image.
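As a concrete starting point, here is a hedged OpenCV/numpy sketch of the hue-histogram approach; it masks out near-grey pixels, which sidesteps the "hue is arbitrary for black/white" problem above (the file name and thresholds are assumptions):

```python
import cv2
import numpy as np

img = cv2.imread("photo.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)
mask = (s > 50) & (v > 50)  # hue is meaningless for near-grey/dark pixels
hist, _ = np.histogram(h[mask], bins=180, range=(0, 180))  # OpenCV hue: 0..179
print("dominant hue:", 2 * int(np.argmax(hist)), "degrees")  # back to 0..359
```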
It sounds like you want to start by computing an image histogram or color histogram of the image. The predominant color(s) will be related to the peak(s) in the histogram.
You might want to change the image from RGB to indexed; then you could use a regular histogram and detect the peaks (Matlab does this with rgb2ind(), as you probably already know), and the problem would be reduced to your regular "finding peaks in an array".
Then
n = hist(Y,nbins) bins the elements in vector Y into nbins equally spaced containers and returns the number of elements in each container as a row vector.
The values in n tell you how many elements fall in each bin. Then it's just a matter of fiddling with the number of bins to make them wide enough, deciding how many elements a bin must contain before you count it as a predominant color, taking the bins that qualify, calculating the index that corresponds to their middle, and converting that back to RGB.
Whatever you're using for your processing probably has similar functions to these.
1. Average all pixels in the image.
2. Remove all pixels that are farther from the average color than one standard deviation.
3. GOTO 1 with the remaining pixels, until arbitrarily few are left (1, or maybe 1%).
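A numpy sketch of that trimming loop (the stopping fraction is the "1%" suggested above; everything else is assumed):

```python
import numpy as np

def dominant_color(pixels, min_fraction=0.01):
    """pixels: (N, 3) array of RGB values; returns the trimmed mean colour."""
    remaining = pixels.reshape(-1, 3).astype(float)
    floor = max(1, int(len(remaining) * min_fraction))
    while len(remaining) > floor:
        mean = remaining.mean(axis=0)                  # step 1: average
        dist = np.linalg.norm(remaining - mean, axis=1)
        keep = dist <= dist.std()                      # step 2: trim outliers
        if keep.all() or not keep.any():               # converged / degenerate
            break
        remaining = remaining[keep]                    # step 3: GOTO 1
    return remaining.mean(axis=0)
```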
You might also want to pre-process the image, for example by applying a high-pass filter (removing only very low frequencies) to even out lighting in the photo; see http://en.wikipedia.org/wiki/Checker_shadow_illusion

How can I generate multiple shades from a given base color?

I'd like to design a chart and set the colors from a single exemplar, the same way as in Excel's:
Is there some sort of formula or algorithm to generate the next shade of color from a given shade or color?
That looks to me like they just took the same hue (basic color) and turned the brightness up and down. That can be done easily enough with HSL or HSV transformations. Check Wikipedia for the HSL and HSV color spaces to get some understanding of the theory involved.
Basic idea: Computers represent color with a mixture of red intensity, green intensity and blue intensity, called RGB, because that's the way the screen displays color. HSL (Hue, Saturation, Lightness) and HSV (Hue, Saturation, Value) are two alternative models for representing color that are more intuitive and closer to the way human beings tend to think about how colors look.
Hue is the basic color, represented (more or less) as an angle on a color wheel. Saturation is a linear value, from 0 (gray) up to full intensity (a bright, vibrant color). And Lightness/Value represents brightness, from black at 0 up to white (in HSL) or the full-intensity color (in HSV); the numeric scale (0-1, 0-100 or 0-255) depends on the implementation.
The algorithms for transforming RGB -> HSL and HSL -> RGB (or HSV instead of HSL) are pretty straightforward. Try transforming your color to HS*, adjusting the brightness, and transforming back. By taking several brightness values from low to high and arranging them as wedges in a pie chart, you can duplicate that picture pretty easily.
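As a quick illustration, here is a Python sketch of that "transform, adjust brightness, transform back" loop (the 25-75% lightness range is an arbitrary choice of mine):

```python
import colorsys

def shades(rgb, n=6):
    """n shades of one base colour: hold hue/saturation, step HSL lightness."""
    h, l, s = colorsys.rgb_to_hls(*(c / 255.0 for c in rgb))
    result = []
    for i in range(n):
        li = 0.25 + 0.5 * i / (n - 1)  # spread lightness from 25% to 75%
        result.append(tuple(round(c * 255) for c in colorsys.hls_to_rgb(h, li, s)))
    return result

print(shades((46, 116, 181)))  # a blue similar to Excel's default theme colour
```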
Look into the HSV colour space. Using it you can produce different shades or tints starting from a given colour. There is a page with Pascal / Delphi code for conversion between RGB and HSV at efg's Computer Lab.
Roderick, the links mghie posted are great to start with. Additionally, try the ColorLib Delphi library, which lets you convert between color models and also includes HTML color conversion utilities. It is very complete, with full source code included, and freeware ;).
Check the demo application; in this image you can see a blue palette generated using this library.
