How do I reduce a bitmap to a known set of RGB colours

For a hobby project I'm going to build a program that when given an image bitmap will create a cross-stitch pattern as a PDF. I'll be using Cocoa/Objective C on a Mac.
The source bitmap will typically be a 24bpp image, but of the millions of colours available, only a few exist as cross-stitch threads. Threads come in various types. DMC is the most widely available, and almost their entire range is available as RGB values from various web sites. Here's one, for instance.
DMC# Name R G B
----- ------------------ --- --- ---
blanc White 255 255 255
208 Lavender - vy dk 148 91 128
209 Lavender - dk 206 148 186
210 Lavender - md 236 207 225
211 Lavender - lt 243 218 228
...etc...
My first problem, as I see it, is this: starting from the RGB of a pixel in the image, how do I choose the nearest colour available from the DMC set? What's the best way of finding the nearest DMC colour mathematically, and ensuring that it's a close fit as a colour too?
Although I'll be using Cocoa, feel free to use pseudo-code (or even Java!) in any code you post.

Use the LAB color space and find the color with the smallest Euclidean distance. Doing this in the RGB color space will yield counter-intuitive results. (Or use the HSL color space.)
So just iterate over each pixel and find the color with the closest distance within the color space you choose. Note that the distance must be computed circularly for some color spaces (e.g. those employing hue).
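The circular distance for hue-based spaces mentioned above could look like this (a small sketch; the class name is just for illustration):

```java
public class HueDistance {
    // Distance between two hues in degrees, wrapping around the 360-degree circle,
    // so that e.g. 350 and 10 are only 20 degrees apart.
    static double hueDistance(double h1, double h2) {
        double d = Math.abs(h1 - h2) % 360.0;
        return d > 180.0 ? 360.0 - d : d;
    }
}
```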
(Most color quantization revolves around actually choosing a palette, but that has already been taken care of in your case, so you can't use the more popular quantization techniques.)
Also, check out this question.
To find the HSB hue in Cocoa, it looks like you can use the getHue method declared in NSColor.h.
However, if you just convert an image to a cross-stitch design using this technique, it will be very hard to actually stitch it. It will be full of single-pixel color fields, which sort of defeats the purpose of cross-stitching.
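A minimal sketch of the LAB suggestion in Java (the sRGB-to-XYZ-to-LAB constants assume sRGB input and a D65 white point; class and method names are just for illustration):

```java
public class NearestLab {
    // Convert an sRGB color (0-255 per channel) to CIE LAB, assuming a D65 white point.
    static double[] rgbToLab(int r, int g, int b) {
        double[] in = {r / 255.0, g / 255.0, b / 255.0};
        double[] lin = new double[3];
        // Undo the sRGB gamma curve to get linear RGB
        for (int i = 0; i < 3; i++) {
            double c = in[i];
            lin[i] = c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
        }
        // Linear RGB -> XYZ (standard sRGB matrix, D65)
        double x = 0.4124 * lin[0] + 0.3576 * lin[1] + 0.1805 * lin[2];
        double y = 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2];
        double z = 0.0193 * lin[0] + 0.1192 * lin[1] + 0.9505 * lin[2];
        // Normalize by the D65 reference white and apply the LAB transfer function
        double fx = f(x / 0.95047), fy = f(y / 1.0), fz = f(z / 1.08883);
        return new double[]{116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)};
    }

    static double f(double t) {
        return t > 0.008856 ? Math.cbrt(t) : 7.787 * t + 16.0 / 116.0;
    }

    // Index of the palette entry with the smallest Euclidean distance in LAB.
    static int nearest(double[] lab, double[][] palette) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < palette.length; i++) {
            double dL = lab[0] - palette[i][0];
            double dA = lab[1] - palette[i][1];
            double dB = lab[2] - palette[i][2];
            double d = dL * dL + dA * dA + dB * dB;
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }
}
```

You would convert the whole DMC table to LAB once up front, then run `nearest` per pixel.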

This is called color quantization, and there are many algorithms available.
One very basic approach is to treat RGB colors as points in space and use plain old Euclidean distance between colors to figure out how "close" they are. This has drawbacks, since human eyes have different sensitivity at different places in this space, so such a distance does not correspond well to how humans perceive the colors. You can use various weighting schemes to improve that situation.
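One such weighting scheme, sketched in Java (the constants are from the common "redmean" low-cost approximation, not something specific to this answer):

```java
public class WeightedRgb {
    // Weighted RGB distance ("redmean" approximation): the channel weights vary
    // with the mean red level, which tracks perception better than plain
    // Euclidean distance while staying cheap to compute.
    static double distance(int r1, int g1, int b1, int r2, int g2, int b2) {
        double rMean = (r1 + r2) / 2.0;
        double dr = r1 - r2, dg = g1 - g2, db = b1 - b2;
        return Math.sqrt(
            (2 + rMean / 256.0) * dr * dr
          + 4 * dg * dg
          + (2 + (255 - rMean) / 256.0) * db * db);
    }
}
```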

Interesting... :)
You would not only want to identify the nearest colors, you would also want to reduce the number of colors used. You don't want to end up with a stitching pattern that uses hundreds of different colors...
I put together some code that does this on a basic level. (Sorry that it's in C#, I hope that it can be somewhat useful anyway.)
There is some further tweaking that needs to be done before the method works well, of course. The GetDistance method weights the importance of hue, saturation and brightness against each other; finding the best balance between those is important in order to find the color that looks closest.
There is also a lot that can be done with the method of reducing the palette. In the example I just picked the most used colors, but you probably want to weight in how alike the colors in the palette are. This can be done by picking the most used color, reducing the count for the remaining colors in the list depending on their distance to the picked color, and then re-sorting the list.
The Hsl class that holds a DMC color, can calculate the distance to another color, and find the nearest color in a list of colors:
public class Hsl {
public string DmcNumber { get; private set; }
public Color Color { get; private set; }
public float Hue { get; private set; }
public float Saturation { get; private set; }
public float Brightness { get; private set; }
public int Count { get; set; }
public Hsl(Color c) {
DmcNumber = "unknown";
Color = c;
Hue = c.GetHue();
Saturation = c.GetSaturation();
Brightness = c.GetBrightness();
Count = 0;
}
public Hsl(string dmc, int r, int g, int b)
: this(Color.FromArgb(r, g, b))
{
DmcNumber = dmc;
}
private static float AngleDifference(float a1, float a2) {
float a = Math.Abs(a1 - a2);
if (a > 180f) {
a = 360f - a;
}
return a / 180f;
}
public float GetDistance(Hsl other) {
return
AngleDifference(Hue, other.Hue) * 3.0f +
Math.Abs(Saturation - other.Saturation) +
Math.Abs(Brightness - other.Brightness) * 4.0f;
}
public Hsl GetNearest(IEnumerable<Hsl> dmcColors) {
Hsl nearest = null;
float nearestDistance = float.MaxValue;
foreach (Hsl dmc in dmcColors) {
float distance = GetDistance(dmc);
if (distance < nearestDistance) {
nearestDistance = distance;
nearest = dmc;
}
}
return nearest;
}
}
This code sets up a (heavily reduced) list of DMC colors, loads an image, counts the colors, reduces the palette and converts the image. You would of course also want to save the information from the reduced palette somewhere.
Hsl[] dmcColors = {
new Hsl("blanc", 255, 255, 255),
new Hsl("310", 0, 0, 0),
new Hsl("317", 167, 139, 136),
new Hsl("318", 197, 198, 190),
new Hsl("322", 81, 109, 135),
new Hsl("336", 36, 73, 103),
new Hsl("413", 109, 95, 95),
new Hsl("414", 167, 139, 136),
new Hsl("415", 221, 221, 218),
new Hsl("451", 179, 151, 143),
new Hsl("452", 210, 185, 175),
new Hsl("453", 235, 207, 185),
new Hsl("503", 195, 206, 183),
new Hsl("504", 206, 221, 193),
new Hsl("535", 85, 85, 89)
};
Bitmap image = (Bitmap)Image.FromFile(@"d:\temp\pattern.jpg");
// count colors used
List<Hsl> usage = new List<Hsl>();
for (int y = 0; y < image.Height; y++) {
for (int x = 0; x < image.Width; x++) {
Hsl color = new Hsl(image.GetPixel(x, y));
Hsl nearest = color.GetNearest(dmcColors);
int index = usage.FindIndex(h => h.Color.Equals(nearest.Color));
if (index != -1) {
usage[index].Count++;
} else {
nearest.Count = 1;
usage.Add(nearest);
}
}
}
// reduce number of colors by picking the most used
Hsl[] reduced = usage.OrderBy(c => -c.Count).Take(5).ToArray();
// convert image
for (int y = 0; y < image.Height; y++) {
for (int x = 0; x < image.Width; x++) {
Hsl color = new Hsl(image.GetPixel(x, y));
Hsl nearest = color.GetNearest(reduced);
image.SetPixel(x, y, nearest.Color);
}
}
image.Save(@"d:\temp\pattern.png", System.Drawing.Imaging.ImageFormat.Png);

Get the source for the ppmquant application from the netpbm set of utilities.

Others have pointed out various techniques for color quantization. It's possible to use techniques like Markov Random Fields to try to penalize the system for switching thread colors at neighboring pixel locations. There are some generic multi-label MRF libraries out there including Boykov's.
To use one of these, the data elements would be the input colors, the labels would be the set of thread colors, the data terms could be something like the Euclidean distance in LAB space suggested by bzlm, and the neighborhood terms would penalize for switching thread colors.

Depending on how much the correctness of your color operations matters, remember to take color spaces into account. While I have studied this somewhat, due to my photography hobby, I'm still a bit confused about everything.
But, as previously mentioned, use LAB as much as possible, because (as far as I know) it's device-independent, while all the other models (RGB/HSL/CMYK) mean nothing (in theory) without a defined color space.
RGB, for example, is just three percentage values (0-255 => 0-100%, with 8-bit color depth). So, if you have an RGB triplet of (0,255,0), it translates to "only green, and as much of it as possible". The question then is "how green is green?". This is the question that a color space answers - sRGB 100%-green is not as green as AdobeRGB 100%-green. It's not even the same hue!
Sorry if this went off-topic.

Related

UIColorPickerViewController is changing the input color slightly

I need to allow the user to choose a color on iOS.
I use the following code to fire up the color picker:
var picker = new UIColorPickerViewController();
picker.SupportsAlpha = true;
picker.Delegate = this;
picker.SelectedColor = color.ToUIColor();
PresentViewController(picker, true, null);
When the color picker displays, the color is always slightly off. For example:
input RGBA: (220, 235, 92, 255)
the initial color in the color picker might be:
selected color: (225, 234, 131, 255)
(these are real values from tests). Not a long way off... but enough to notice if you are looking for it.
I was wondering if the color picker grid was forcing the color to the nearest color entry - but if that were true, you would expect certain colors to stay fixed (i.e. if the input color exactly matches one of the grid colors, it should stay unchanged). That does not happen.
p.s. I store colors in a cross platform fashion using simple RGBA values.
The ToUIColor converts to local UIColor using
new UIColor((nfloat)rgb.r, (nfloat)rgb.g, (nfloat)rgb.b, (nfloat)rgb.a);
From the hints in comments by @DonMag, I've got some way towards an answer, and also a set of resources that can help if you are struggling with this.
The key challenge is that macOS and iOS use Display P3 as the display color space, but most people use default {UI,NS,CG}Color objects, which use the sRGB color space (actually... technically they are Extended sRGB, so they can cover the wider gamut of Display P3). If you want to know the difference between these three, there are resources below.
When you use the UIColorPickerViewController, it allows the user to choose colors in DisplayP3 color space (I show an image of the picker below, and you can see the "Display P3 Hex Colour" at the bottom).
If you give it a color in sRGB, I think it gets converted to DisplayP3. When you read the color, you need to convert back to sRGB, which is the step I missed.
However I found that using CGColor.CreateByMatchingToColorSpace, to convert from DisplayP3 to sRGB never quite worked. In the code below I convert to and from DisplayP3 and should have got back my original color, but I never did. I tried removing Gamma by converting to a Linear space on the way but that didn't help.
cg = new CGColor(...values...); // defaults to sRGB
// sRGB to DisplayP3
tmp = CGColor.CreateByMatchingToColorSpace(
CGColorSpace.CreateWithName("kCGColorSpaceDisplayP3"),
CGColorRenderingIntent.Default, cg, null);
// DisplayP3 to sRGB
cg2 = CGColor.CreateByMatchingToColorSpace(
CGColorSpace.CreateWithName("kCGColorSpaceExtendedSRGB"),
CGColorRenderingIntent.Default, tmp, null);
Then I found an excellent resource: http://endavid.com/index.php?entry=79 that included a set of matrices that can perform the conversions. And that seems to work.
So now I have extended CGColor as follows:
public static CGColor FromExtendedsRGBToDisplayP3(this CGColor c)
{
if (c.ColorSpace.Name != "kCGColorSpaceExtendedSRGB")
throw new Exception("Bad color space");
var mat = LinearAlgebra.Matrix<float>.Build.Dense(3, 3, new float[] { 0.8225f, 0.1774f, 0f, 0.0332f, 0.9669f, 0, 0.0171f, 0.0724f, 0.9108f });
var vect = LinearAlgebra.Vector<float>.Build.Dense(new float[] { (float)c.Components[0], (float)c.Components[1], (float)c.Components[2] });
vect = vect * mat;
var cg = new CGColor(CGColorSpace.CreateWithName("kCGColorSpaceDisplayP3"), new nfloat[] { vect[0], vect[1], vect[2], c.Components[3] });
return cg;
}
public static CGColor FromP3ToExtendedsRGB(this CGColor c)
{
if (c.ColorSpace.Name != "kCGColorSpaceDisplayP3")
throw new Exception("Bad color space");
var mat = LinearAlgebra.Matrix<float>.Build.Dense(3, 3, new float[] { 1.2249f, -0.2247f, 0f, -0.0420f, 1.0419f, 0f, -0.0197f, -0.0786f, 1.0979f });
var vect = LinearAlgebra.Vector<float>.Build.Dense(new float[] { (float)c.Components[0], (float)c.Components[1], (float)c.Components[2] });
vect = vect * mat;
var cg = new CGColor(CGColorSpace.CreateWithName("kCGColorSpaceExtendedSRGB"), new nfloat[] { vect[0], vect[1], vect[2], c.Components[3] });
return cg;
}
Note: there are lots of assumptions in the matrices w.r.t. white point and gamma. But it works for me. Let me know if there are better approaches out there, or if you can tell me why my use of CGColor.CreateByMatchingToColorSpace didn't quite work.
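As a rough sanity check on the two matrices above (reading the nine constants as row-major 3x3), their product should be approximately the identity, since the two conversions are meant to be inverses of each other. A small sketch:

```java
public class MatrixCheck {
    // Plain 3x3 matrix multiply.
    static double[][] multiply(double[][] a, double[][] b) {
        double[][] p = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    p[i][j] += a[i][k] * b[k][j];
        return p;
    }

    // sRGB -> Display P3, the constants from FromExtendedsRGBToDisplayP3
    static double[][] toP3 = {
        {0.8225, 0.1774, 0},
        {0.0332, 0.9669, 0},
        {0.0171, 0.0724, 0.9108}
    };
    // Display P3 -> sRGB, the constants from FromP3ToExtendedsRGB
    static double[][] toSrgb = {
        {1.2249, -0.2247, 0},
        {-0.0420, 1.0419, 0},
        {-0.0197, -0.0786, 1.0979}
    };
}
```

Multiplying `toP3` by `toSrgb` gives entries within about 1e-4 of the identity matrix, so a round trip should return (almost exactly) the original color.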
Reading Resources:
Reading this: https://stackoverflow.com/a/49040628/6257435
then this: https://bjango.com/articles/colourmanagementgamut/
are essential starting points.
Image of the iOS Color Picker:

Efficiently tell if one image is entirely comprised of the pixel values of another in OpenCV

I am trying to find an efficient way to see if one image is a subset of another (meaning that each unique pixel in one image is also found in the other). The repetition or ordering of the pixels does not matter.
I am working in Java, so I would like all of my operations to be completed in OpenCV for efficiency's sake.
My first idea was to export a list of unique pixel values, and compare it to the list from the second image.
As there is no built-in function to extract unique pixels, I abandoned this approach.
I also understand that I can find the locations of a particular color with the inclusive inRange, and findNonZero operations.
Core.inRange(image, color, color, tempMat); // inclusive
Core.findNonZero(tempMat, colorLocations);
Unfortunately, this does not provide an adequate answer, as it would need to be executed per color, and would still require extracting unique pixels.
Essentially, I'm asking if there is a clever way to use the built in OpenCV functions to see if an image is comprised of the pixels found in another image.
I understand that this will not work for slight color differences. I am working on a limited dataset, and care about the exact pixel values.
Because the only thing you are interested in is the pixel values, I would suggest the following:
Compute the histogram of image 1 using hist1 = calcHist()
Compute the histogram of image 2 using hist2 = calcHist()
Calculate the difference vector diff = hist1 - hist2
Check that each bin of the histogram of the sub-image is less than or equal to the corresponding bin in the histogram of the bigger image
Thanks to Miki for the fix.
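Since the question states that repetition of pixels does not matter, the bin check can be reduced to presence: every non-zero bin of the sub-image's histogram must also be non-zero in the big image's histogram. A sketch over plain integer histograms (not OpenCV-specific; the class name is just for illustration):

```java
public class HistSubset {
    // True if every bin that is non-zero in subHist is also non-zero in supHist,
    // i.e. every pixel value used in the sub-image also occurs in the big image.
    static boolean isSubset(int[] subHist, int[] supHist) {
        for (int i = 0; i < subHist.length; i++) {
            if (subHist[i] > 0 && supHist[i] == 0) return false;
        }
        return true;
    }
}
```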
I will keep Amitay's as the accepted answer, as he absolutely lead me down the correct path. I wanted to also share my exact answer for anyone who finds this in the future.
As I stated in my question, I was looking for an efficient way to see if the RGB values of one image were a subset of the RGB values of another image.
I made a function to the following specification:
The Java code is as follows:
private boolean isSubset(Mat subset, Mat subMask, Mat superset) {
// Get unique set of pixels from both images
subset = getUniquePixels(subset, subMask);
superset = getUniquePixels(superset, null);
// See if the superset pixels encapsulate the subset pixels
// OR the unique pixels together
Mat subOrSuper = new Mat();
Core.bitwise_or(subset, superset, subOrSuper);
//See if the ORed matrix is equal to the superset
Mat notEqualMat = new Mat();
Core.compare(superset, subOrSuper, notEqualMat, Core.CMP_NE);
return Core.countNonZero(notEqualMat) == 0;
}
subset and superset are assumed to be CV_8UC3 matrices, while subMask is assumed to be CV_8UC1.
private Mat getUniquePixels(Mat img, Mat mask) {
if (mask == null) {
mask = new Mat();
}
// int bgrValue = (b << 16) + (g << 8) + r;
img.convertTo(img, CvType.CV_32FC3);
Vector<Mat> splitImg = new Vector<>();
Core.split(img, splitImg);
Mat flatImg = Mat.zeros(img.rows(), img.cols(), CvType.CV_32FC1);
Mat multiplier;
for (int i = 0; i < splitImg.size(); i++) {
multiplier = Mat.ones(img.rows(), img.cols(), CvType.CV_32FC1);
// powTwo = 2^(8*i), i.e. 1, 256, 65536 - one byte per channel
int powTwo = 1 << (8 * i);
// Set the multiplier matrix equal to powTwo
Core.multiply(multiplier, new Scalar(powTwo), multiplier);
// Multiplying channel i by 2^(8*i) shifts its values into a separate
// byte of the same 32-bit integer (see the bgrValue comment above).
Core.multiply(multiplier, splitImg.get(i), splitImg.get(i));
// Add the shifted RGB components together.
Core.add(flatImg, splitImg.get(i), flatImg);
}
// Create a histogram of the pixel values.
List<Mat> images = new ArrayList<>();
images.add(flatImg);
MatOfInt channels = new MatOfInt(0);
Mat hist = new Mat();
// 16777216 == 256*256*256
MatOfInt histSize = new MatOfInt(16777216);
MatOfFloat ranges = new MatOfFloat(0f, 16777216f);
Imgproc.calcHist(images, channels, mask, hist, histSize, ranges);
Mat uniquePixels = new Mat();
Core.inRange(hist, new Scalar(1), new Scalar(Float.MAX_VALUE), uniquePixels);
return uniquePixels;
}
Please feel free to ask questions, or point out problems!

Compare multiple Image Histograms with Processing

[screenshot: picture histogram]
I'm quite new to the processing language. I am trying to create an image comparison tool.
The idea is to get a histogram of a picture (see screenshot below, size is 600x400), which is then compared to 10 other histograms of similar pictures (all size 600x400). The histogram shows the frequency distribution of the gray levels with the number of pure black values displayed on the left and number of pure white values on the right.
In the end I should get a "winning" picture (the one that has the most similar histogram).
Below you can see the code for the image histogram, similar to the processing tutorial example.
My idea was to create a PImage [] for the 10 other pictures to create histograms and then an if statement, but I'm not sure how to code it.
Does anyone have a tip on how to proceed or where to look? I couldn't find a similar post.
Thanks in advance and sorry if the question is very basic!
size(600, 400);
// Load an image from the data directory
// Load a different image by modifying the comments
PImage img = loadImage("image4.jpg");
image(img, 0, 0);
int[] hist = new int[256];
// Calculate the histogram
for (int i = 0; i < img.width; i++) {
for (int j = 0; j < img.height; j++) {
int bright = int(brightness(get(i, j)));
hist[bright]++;
}
}
// Find the largest value in the histogram
int histMax = max(hist);
stroke(255);
// Draw half of the histogram (skip every second value)
for (int i = 0; i < img.width; i += 2) {
// Map i (from 0..img.width) to a location in the histogram (0..255)
int which = int(map(i, 0, img.width, 0, 255));
// Convert the histogram value to a location between
// the bottom and the top of the picture
int y = int(map(hist[which], 0, histMax, img.height, 0));
line(i, img.height, i, y);
}
Not sure if your problem is the implementation in Processing or if you don't know how to compare histograms. I assume it is the comparison, as the rest is pretty straightforward: calculate the similarity for every candidate and pick the winner.
Search the web for histogram comparison and among others you will find:
http://docs.opencv.org/2.4/doc/tutorials/imgproc/histograms/histogram_comparison/histogram_comparison.html
OpenCV implements four measures for histogram similarity.
Correlation:
d(H1,H2) = sum_I (H1(I) - mean(H1)) * (H2(I) - mean(H2)) / sqrt( sum_I (H1(I) - mean(H1))^2 * sum_I (H2(I) - mean(H2))^2 )
where mean(Hk) = (1/N) * sum_J Hk(J) and N is the number of histogram bins,
or Chi-Square:
d(H1,H2) = sum_I (H1(I) - H2(I))^2 / H1(I)
or Intersection:
d(H1,H2) = sum_I min(H1(I), H2(I))
or Bhattacharyya distance:
d(H1,H2) = sqrt( 1 - (1 / sqrt(mean(H1) * mean(H2) * N^2)) * sum_I sqrt(H1(I) * H2(I)) )
You can use these measures, but I'm sure you'll find something else as well.
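As an illustration of one of these measures, histogram intersection is easy to write in plain Java, and a Processing sketch can call it directly (higher score means more similar; class name is just for illustration):

```java
public class HistCompare {
    // Histogram intersection: sum of per-bin minimums. Higher = more similar.
    static long intersection(int[] h1, int[] h2) {
        long sum = 0;
        for (int i = 0; i < h1.length; i++) sum += Math.min(h1[i], h2[i]);
        return sum;
    }

    // Index of the candidate whose histogram is most similar to the reference -
    // this is the "winning picture" logic from the question.
    static int winner(int[] reference, int[][] candidates) {
        int best = 0;
        long bestScore = Long.MIN_VALUE;
        for (int i = 0; i < candidates.length; i++) {
            long s = intersection(reference, candidates[i]);
            if (s > bestScore) { bestScore = s; best = i; }
        }
        return best;
    }
}
```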

How to remove a "green screen" portrait background

I'm looking for a way to automatically remove (=make transparent) a "green screen" portrait background from a lot of pictures.
My own attempts thus far have been... ehum... less successful.
I'm looking around for any hints or solutions or papers on the subject. Commercial solutions are just fine, too.
And before you comment and say that it is impossible to do this automatically: no it isn't. There actually exists a company which offers exactly this service, and if I fail to come up with a different solution we're going to use them. The problem is that they guard their algorithm with their lives, and therefore won't sell/license their software. Instead we have to FTP all pictures to them where the processing is done and then we FTP the result back home. (And no, they don't have an underpaid staff hidden away in the Philippines which handles this manually, since we're talking several thousand pictures a day...) However, this approach limits its usefulness for several reasons. So I'd really like a solution where this could be done instantly while being offline from the internet.
EDIT: My "portraits" depict persons who do have hair - which is a really tricky part, since the green background will bleed into the hair. Another tricky part is whether it is possible to distinguish between the green in the background and the same green in people's clothes. The company I'm talking about above claims that they can do it by figuring out whether the green areas are in focus (sharp vs. blurred).
Since you didn't provide any image, I selected one from the web having a chroma key with different shades of green and a significant amount of noise due to JPEG compression.
There is no technology specification so I used Java and Marvin Framework.
input image:
Step 1 simply converts green pixels to transparency. Basically it uses a filtering rule in the HSV color space.
As you mentioned, the hair and some boundary pixels present colors mixed with green. To reduce this problem, in step 2 these pixels are filtered and balanced to reduce their green proportion.
before:
after:
Finally, in step 3, a gradient transparency is applied to all boundary pixels. The result will be even better with high-quality images.
final output:
Source code:
import static marvin.MarvinPluginCollection.*;
public class ChromaToTransparency {
public ChromaToTransparency(){
MarvinImage image = MarvinImageIO.loadImage("./res/person_chroma.jpg");
MarvinImage imageOut = new MarvinImage(image.getWidth(), image.getHeight());
// 1. Convert green to transparency
greenToTransparency(image, imageOut);
MarvinImageIO.saveImage(imageOut, "./res/person_chroma_out1.png");
// 2. Reduce remaining green pixels
reduceGreen(imageOut);
MarvinImageIO.saveImage(imageOut, "./res/person_chroma_out2.png");
// 3. Apply alpha to the boundary
alphaBoundary(imageOut, 6);
MarvinImageIO.saveImage(imageOut, "./res/person_chroma_out3.png");
}
private void greenToTransparency(MarvinImage imageIn, MarvinImage imageOut){
for(int y=0; y<imageIn.getHeight(); y++){
for(int x=0; x<imageIn.getWidth(); x++){
int color = imageIn.getIntColor(x, y);
int r = imageIn.getIntComponent0(x, y);
int g = imageIn.getIntComponent1(x, y);
int b = imageIn.getIntComponent2(x, y);
double[] hsv = MarvinColorModelConverter.rgbToHsv(new int[]{color});
if(hsv[0] >= 60 && hsv[0] <= 130 && hsv[1] >= 0.4 && hsv[2] >= 0.3){
imageOut.setIntColor(x, y, 0, 127, 127, 127);
}
else{
imageOut.setIntColor(x, y, color);
}
}
}
}
private void reduceGreen(MarvinImage image){
for(int y=0; y<image.getHeight(); y++){
for(int x=0; x<image.getWidth(); x++){
int r = image.getIntComponent0(x, y);
int g = image.getIntComponent1(x, y);
int b = image.getIntComponent2(x, y);
int color = image.getIntColor(x, y);
double[] hsv = MarvinColorModelConverter.rgbToHsv(new int[]{color});
if(hsv[0] >= 60 && hsv[0] <= 130 && hsv[1] >= 0.15 && hsv[2] > 0.15){
if((r*b) !=0 && (g*g) / (r*b) >= 1.5){
image.setIntColor(x, y, 255, (int)(r*1.4), (int)g, (int)(b*1.4));
} else{
image.setIntColor(x, y, 255, (int)(r*1.2), g, (int)(b*1.2));
}
}
}
}
}
public static void main(String[] args) {
new ChromaToTransparency();
}
}
Take a look at this thread:
http://www.wizards-toolkit.org/discourse-server/viewtopic.php?f=2&t=14394&start=0
and the link within it to the tutorial at:
http://tech.natemurray.com/2007/12/convert-white-to-transparent.html
Then it's just a matter of writing some scripts to look through the directory full of images. Pretty simple.
If you know the "green color", you may write a small program in OpenCV C/C++/Python to extract that color and replace it with transparent pixels.
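Without OpenCV, the same idea can be sketched in plain Java over packed ARGB pixels (the green-dominance thresholds here are arbitrary starting values to tune, not anything standard):

```java
public class ChromaKey {
    // Make pixels fully transparent when green clearly dominates red and blue.
    // The 90 brightness floor and 1.5 dominance ratio are made-up starting
    // points; real footage needs tuning (and edge handling, as noted above).
    static int[] removeGreen(int[] argb) {
        int[] out = new int[argb.length];
        for (int i = 0; i < argb.length; i++) {
            int p = argb[i];
            int r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
            boolean greenish = g > 90 && g > 1.5 * r && g > 1.5 * b;
            out[i] = greenish ? (p & 0x00FFFFFF) : p; // zero the alpha byte
        }
        return out;
    }
}
```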
123 Video Magic Green Screen Background Software is one option, and there are a few more made specifically to remove green screen backgrounds. Hope this helps.
PaintShop Pro allows you to remove backgrounds based on picking a color. They also have a Remove Background wand that will remove whatever you touch (converting those pixels to transparent). You can tweak the "tolerance" for the wand, such that it takes out pixels that are similar to the ones you are touching. This has worked pretty well for me in the past.
To automate it, you'd program a script in PSP that does what you want and then call it from your program. This might be a kludgy way to do automatic replacement, but it would be the cheapest, fastest solution without having to write a bunch of C#/C++ imaging code or pay a commercial agency.
That being said, you get what you pay for.

Extracting Dominant / Most Used Colors from an Image

I would like to extract the most used colors inside an image, or at least the primary tones.
Could you recommend how I can start with this task, or point me to similar code? I have been looking for it but with no success.
You can get very good results using an Octree Color Quantization algorithm. Other quantization algorithms can be found on Wikipedia.
I agree with the comments - a programming solution would definitely need more information. But until then, assuming you'll obtain the RGB values of each pixel in your image, you should consider the HSV color space, where the Hue can be said to represent the "tone" of each pixel. You can then use a histogram to identify the most used tones in your image.
Well, I assume you can access each pixel's RGB color. There are two ways you can do so, depending on what you want.
First, you may simply sum all pixels' R, G and B values, like this.
A pseudo code.
int Red = 0;
int Green = 0;
int Blue = 0;
foreach (Pixels as aPixel) {
Red += aPixel.getRed();
Green += aPixel.getGreen();
Blue += aPixel.getBlue();
}
Then see which is largest.
This only tells you whether the picture is more red, green or blue overall.
Another way also gives you statistics for combined colors (like orange), by simply creating a histogram of each RGB combination.
A pseudo code.
Map ColorCounts = new();
foreach (Pixels as aPixel) {
const aRGB = aPixel.getRGB();
var aCount = ColorCounts.get(aRGB);
aCount++;
ColorCounts.put(aRGB, aCount);
}
Then see which one has more count.
You may also want to reduce the color resolution, as regular RGB coloring gives you up to 16.7 million colors.
This can be done easily by mapping the RGB values to ranges of color. For example, say RGB has 8 steps instead of 256.
A pseudo code.
function Reduce(Color) {
return (Color/32)*32; // 32 is 256/8 as for 8 ranges.
}
function ReduceRGB(RGB) {
return new RGB(Reduce(RGB.getRed()), Reduce(RGB.getGreen()), Reduce(RGB.getBlue()));
}
Map ColorCounts = new();
foreach (Pixels as aPixel) {
const aRGB = ReduceRGB(aPixel.getRGB());
var aCount = ColorCounts.get(aRGB);
aCount++;
ColorCounts.put(aRGB, aCount);
}
Then you can see which range has the highest count.
I hope these techniques make sense to you.
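The two ideas (a histogram of colors plus reduced color resolution) can be combined into runnable Java over packed RGB ints (class name is just for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class DominantColor {
    // Quantize one channel to 8 steps (multiples of 32), as described above.
    static int reduce(int c) { return (c / 32) * 32; }

    // Apply the reduction to each channel of a packed 0xRRGGBB value.
    static int reduceRgb(int rgb) {
        int r = reduce((rgb >> 16) & 0xFF);
        int g = reduce((rgb >> 8) & 0xFF);
        int b = reduce(rgb & 0xFF);
        return (r << 16) | (g << 8) | b;
    }

    // Most frequent reduced color among the pixels.
    static int dominant(int[] pixels) {
        Map<Integer, Integer> counts = new HashMap<>();
        for (int p : pixels) counts.merge(reduceRgb(p), 1, Integer::sum);
        int best = 0, bestCount = -1;
        for (Map.Entry<Integer, Integer> e : counts.entrySet()) {
            if (e.getValue() > bestCount) { bestCount = e.getValue(); best = e.getKey(); }
        }
        return best;
    }
}
```

With 8 steps per channel the map holds at most 512 distinct keys instead of 16.7 million, so near-identical shades collapse into the same bucket.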
