I'm looking for a way to automatically remove (=make transparent) a "green screen" portrait background from a lot of pictures.
My own attempts thus far have been... ehum... less successful.
I'm looking around for any hints or solutions or papers on the subject. Commercial solutions are just fine, too.
And before you comment and say that it is impossible to do this automatically: no it isn't. There actually exists a company which offers exactly this service, and if I fail to come up with a different solution we're going to use them. The problem is that they guard their algorithm with their lives, and therefore won't sell/license their software. Instead we have to FTP all pictures to them where the processing is done and then we FTP the result back home. (And no, they don't have an underpaid staff hidden away in the Philippines which handles this manually, since we're talking several thousand pictures a day...) However, this approach limits its usefulness for several reasons. So I'd really like a solution where this could be done instantly while being offline from the internet.
EDIT: My "portraits" depictures persons, which do have hair - which is a really tricky part since the green background will bleed into hair. Another tricky part is if it is possible to distingush between the green in the background and the same green in peoples clothes. The company I'm talking about above claims that they can do it by figuring out if the green area are in focus (being sharp vs blurred).
Since you didn't provide any image, I selected one from the web having a chroma key with different shades of green and a significant amount of noise due to JPEG compression.
No technology was specified, so I used Java and the Marvin Framework.
input image:
Step 1 simply converts green pixels to transparency. Basically, it uses a filtering rule in the HSV color space.
As you mentioned, the hair and some boundary pixels present colors mixed with green. To reduce this problem, in step 2 these pixels are filtered and balanced to reduce their green proportion.
before:
after:
Finally, in step 3, gradient transparency is applied to all boundary pixels. The result will be even better with high quality images.
final output:
Source code:
import static marvin.MarvinPluginCollection.*;

// Import paths below are assumed from the Marvin framework's usual package layout.
import marvin.color.MarvinColorModelConverter;
import marvin.image.MarvinImage;
import marvin.io.MarvinImageIO;

public class ChromaToTransparency {

    public ChromaToTransparency() {
        MarvinImage image = MarvinImageIO.loadImage("./res/person_chroma.jpg");
        MarvinImage imageOut = new MarvinImage(image.getWidth(), image.getHeight());
        // 1. Convert green to transparency
        greenToTransparency(image, imageOut);
        MarvinImageIO.saveImage(imageOut, "./res/person_chroma_out1.png");
        // 2. Reduce remaining green pixels
        reduceGreen(imageOut);
        MarvinImageIO.saveImage(imageOut, "./res/person_chroma_out2.png");
        // 3. Apply alpha to the boundary
        alphaBoundary(imageOut, 6);
        MarvinImageIO.saveImage(imageOut, "./res/person_chroma_out3.png");
    }

    // Turns pixels inside the green HSV range fully transparent.
    private void greenToTransparency(MarvinImage imageIn, MarvinImage imageOut) {
        for (int y = 0; y < imageIn.getHeight(); y++) {
            for (int x = 0; x < imageIn.getWidth(); x++) {
                int color = imageIn.getIntColor(x, y);
                double[] hsv = MarvinColorModelConverter.rgbToHsv(new int[]{color});
                if (hsv[0] >= 60 && hsv[0] <= 130 && hsv[1] >= 0.4 && hsv[2] >= 0.3) {
                    imageOut.setIntColor(x, y, 0, 127, 127, 127);
                } else {
                    imageOut.setIntColor(x, y, color);
                }
            }
        }
    }

    // Rebalances the remaining greenish pixels (hair, boundary) to reduce the green cast.
    private void reduceGreen(MarvinImage image) {
        for (int y = 0; y < image.getHeight(); y++) {
            for (int x = 0; x < image.getWidth(); x++) {
                int r = image.getIntComponent0(x, y);
                int g = image.getIntComponent1(x, y);
                int b = image.getIntComponent2(x, y);
                int color = image.getIntColor(x, y);
                double[] hsv = MarvinColorModelConverter.rgbToHsv(new int[]{color});
                if (hsv[0] >= 60 && hsv[0] <= 130 && hsv[1] >= 0.15 && hsv[2] > 0.15) {
                    if ((r * b) != 0 && (g * g) / (r * b) >= 1.5) {
                        image.setIntColor(x, y, 255, (int) (r * 1.4), g, (int) (b * 1.4));
                    } else {
                        image.setIntColor(x, y, 255, (int) (r * 1.2), g, (int) (b * 1.2));
                    }
                }
            }
        }
    }

    public static void main(String[] args) {
        new ChromaToTransparency();
    }
}
Take a look at this thread:
http://www.wizards-toolkit.org/discourse-server/viewtopic.php?f=2&t=14394&start=0
and the link within it to the tutorial at:
http://tech.natemurray.com/2007/12/convert-white-to-transparent.html
Then it's just a matter of writing some scripts to look through the directory full of images. Pretty simple.
If you know the "green color" you may write a small program in OpenCV (C/C++/Python) to extract that color and replace it with transparent pixels.
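For instance, a minimal C++/OpenCV sketch could look like this; the green HSV range and file names are assumptions to tune, not fixed values.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat bgr = cv::imread("portrait.jpg");          // hypothetical input file
    cv::Mat hsv, mask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    // Rough green range in OpenCV's HSV (hue runs 0-179); tune S/V bounds for your lighting.
    cv::inRange(hsv, cv::Scalar(35, 80, 80), cv::Scalar(85, 255, 255), mask);
    cv::Mat bgra;
    cv::cvtColor(bgr, bgra, cv::COLOR_BGR2BGRA);
    std::vector<cv::Mat> ch;
    cv::split(bgra, ch);
    ch[3].setTo(0, mask);                              // matched (green) pixels become transparent
    cv::merge(ch, bgra);
    cv::imwrite("portrait_out.png", bgra);             // PNG keeps the alpha channel
    return 0;
}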
123 Video Magic Green Screen Background Software is one option, and there are a few more tools made specifically to remove green screen backgrounds. Hope this helps.
PaintShop Pro allows you to remove backgrounds based on picking a color. They also have a Remove Background wand that will remove whatever you touch (converting those pixels to transparent). You can tweak the "tolerance" for the wand, such that it takes out pixels that are similar to the ones you are touching. This has worked pretty well for me in the past.
To automate it, you'd program a script in PSP that does what you want and then call it from your program. This might be a kludgy way to do automatic replacement, but it would be the cheapest, fastest solution without having to write a bunch of C#/C++ imaging code or pay a commercial agency.
That being said, you get what you pay for.
So, here is my situation. I have created an object detection program which is based on color object detection. My program detects the color red and it works perfectly. But here are the problems I am facing:
Whenever there is more than one red object in the surroundings, my program detects them all and cannot really track just one object (i.e. it tracks other red objects of various sizes in the background). It shows me the error "too much noise in the background". As you can see in the attached "threshold image", it detects the round object (which is my tracking object) and my cap, which is red in color. I want my program to detect only my tracking object (a round-shaped coke cap). How can I achieve that? Please help me out. I have my engineering design contest in a few days and I have to demo my program in front of my lecturers. My program should only be able to detect and track the object which I want. Thanks
My code for the object detection program is a little long, so I am explaining it here as follows: I capture a frame from the webcam, convert it to HSV, use an HSV inRange filter to filter out all colors but red, and apply morphological operations on the filtered image. This all goes in my main function.
I am using a frame resolution of 1280*720 for my webcam frame. It slows down my program a bit, but it was a trade-off I had to make for performing gesture-controlled operations. Anyway, here are my drawObject and trackFilteredObject functions.
int H_MIN = 0;
int H_MAX = 256;
int S_MIN = 0;
int S_MAX = 256;
int V_MIN = 0;
int V_MAX = 256;
//default capture width and height
const int FRAME_WIDTH = 1280;
const int FRAME_HEIGHT = 720;
//max number of objects to be detected in frame
const int MAX_NUM_OBJECTS=50;
//minimum and maximum object area
const int MIN_OBJECT_AREA = 20*20;
const int MAX_OBJECT_AREA = FRAME_HEIGHT*FRAME_WIDTH/1.5;
void drawObject(int x, int y,Mat &frame){
circle(frame,Point(x,y),20,Scalar(0,255,0),2);
if(y-25>0)
line(frame,Point(x,y),Point(x,y-25),Scalar(0,255,0),2);
else line(frame,Point(x,y),Point(x,0),Scalar(0,255,0),2);
if(y+25<FRAME_HEIGHT)
line(frame,Point(x,y),Point(x,y+25),Scalar(0,255,0),2);
else line(frame,Point(x,y),Point(x,FRAME_HEIGHT),Scalar(0,255,0),2);
if(x-25>0)
line(frame,Point(x,y),Point(x-25,y),Scalar(0,255,0),2);
else line(frame,Point(x,y),Point(0,y),Scalar(0,255,0),2);
if(x+25<FRAME_WIDTH)
line(frame,Point(x,y),Point(x+25,y),Scalar(0,255,0),2);
else line(frame,Point(x,y),Point(FRAME_WIDTH,y),Scalar(0,255,0),2);
putText(frame,intToString(x)+","+intToString(y),Point(x,y+30),1,1,Scalar(0,255,0),2);
}
void trackFilteredObject(int &x, int &y, Mat threshold, Mat &cameraFeed){
Mat temp;
threshold.copyTo(temp);
//these two vectors needed for output of findContours
vector< vector<Point> > contours;
vector<Vec4i> hierarchy;
//find contours of filtered image using openCV findContours function
findContours(temp,contours,hierarchy,CV_RETR_CCOMP,CV_CHAIN_APPROX_SIMPLE );
//use moments method to find our filtered object
double refArea = 0;
bool objectFound = false;
if (hierarchy.size() > 0) {
int numObjects = hierarchy.size();
//if number of objects greater than MAX_NUM_OBJECTS we have a noisy filter
if(numObjects<MAX_NUM_OBJECTS){
for (int index = 0; index >= 0; index = hierarchy[index][0]) {
Moments moment = moments((cv::Mat)contours[index]);
double area = moment.m00;
//if the area is less than 20 px by 20px then it is probably just noise
//if the area is the same as the 3/2 of the image size, probably just a bad filter
//we only want the object with the largest area so we save a reference area each
//iteration and compare it to the area in the next iteration.
if(area>MIN_OBJECT_AREA && area<MAX_OBJECT_AREA && area>refArea){
x = moment.m10/area;
y = moment.m01/area;
objectFound = true;
refArea = area;
}else objectFound = false;
}
//let user know you found an object
if(objectFound ==true){
putText(cameraFeed,"Tracking Object",Point(0,50),2,1,Scalar(0,255,0),2);
//draw object location on screen
drawObject(x,y,cameraFeed);}
}else putText(cameraFeed,"TOO MUCH NOISE! ADJUST FILTER",Point(0,50),1,2,Scalar(0,0,255),2);
}
}
Here is the link to the image; as you can see, it also detects the red hat in the background along with the red cap of the coke bottle.
My observations: Here is what I think I need to do to achieve my goal of not detecting red objects of unknown sizes. I think I have to edit the value of the maximum object area, which I declared in the above program as (const int MAX_OBJECT_AREA = FRAME_HEIGHT*FRAME_WIDTH/1.5;). Changing this value might eliminate the detection of bigger continuous red areas. But there is another problem: some objects are not completely red and only have patches of red among other colors. If such a detected area is within the range specified in my program, then my program detects those red patches too. What I mean is that I was wearing a t-shirt with mixed colors, and when I tested my program while wearing it, my program was able to detect the red color among the other colors. Now, how do I solve this issue?
I think you can try out the following procedure (a rough code sketch follows the list):
obtain a circular kernel having roughly the same area as your object of interest. You can do it like: Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(d, d));
where d is the diameter of the disk.
perform normalized cross-correlation or convolution of the filtered regions image with this kernel (I think normalized cross-correlation would be better; also add an empty border around the kernel).
the peak of the resulting image should give you the location of the circular region in your filtered image (if you are using normalized-cross-correlation, you'll have to add the shift).
To speed things up, you can perform this at a reduced resolution.
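A rough sketch of that procedure in C++/OpenCV follows; the diameter d, the float conversions, and using matchTemplate with TM_CCORR_NORMED are my assumptions about how to wire it up.
#include <opencv2/opencv.hpp>

// Finds the centre of the best circular match in a binary (thresholded) image.
cv::Point findDisk(const cv::Mat& thresholded, int d) {
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(d, d));
    cv::Mat src, tpl, result;
    thresholded.convertTo(src, CV_32F);
    kernel.convertTo(tpl, CV_32F);
    // (The answer suggests padding the kernel with an empty border; cv::copyMakeBorder
    //  could be used for that - omitted here for brevity.)
    cv::matchTemplate(src, tpl, result, cv::TM_CCORR_NORMED);
    double maxVal;
    cv::Point maxLoc;
    cv::minMaxLoc(result, nullptr, &maxVal, nullptr, &maxLoc);
    // matchTemplate reports the top-left corner of the best match; shift to the centre.
    return cv::Point(maxLoc.x + d / 2, maxLoc.y + d / 2);
}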
You can filter out non-circular shapes by detecting circles in your thresholded image. OpenCV provides a built-in method to detect circles using the Hough transform, more info here. You can take advantage of this function to retain only circles that have a radius in a given range.
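A hedged sketch of that idea in C++/OpenCV; the blur kernel, Hough parameters and radius bounds are guesses to tune, and in OpenCV 2.x the constant is CV_HOUGH_GRADIENT instead of cv::HOUGH_GRADIENT.
#include <vector>
#include <opencv2/opencv.hpp>

// Returns the detected circles as (x, y, radius) triples.
std::vector<cv::Vec3f> findCaps(const cv::Mat& thresholded, int minR, int maxR) {
    cv::Mat blurred;
    cv::GaussianBlur(thresholded, blurred, cv::Size(9, 9), 2);  // smoothing reduces false circles
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(blurred, circles, cv::HOUGH_GRADIENT,
                     1,                 // accumulator resolution (same as input)
                     blurred.rows / 4,  // minimum distance between circle centres
                     100, 20,           // Canny high threshold, accumulator threshold
                     minR, maxR);       // keep only radii in the expected range
    return circles;
}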
Another possibility is to implement connected component labeling (CCL) into your demo program.
I believe that it was removed at some point in versions 2.x of OpenCV, but a basic implementation of the two-pass version is straightforward from the Wikipedia page.
CCL will assign a unique ID for each object after thresholding. You then have to implement matching between the objects at frame (T-1) and objects in frame (T) (for example based on some nearest distance criterion) and possibly trajectory filtering or smoothing, but this would definitely give you some extra-points.
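If you can use a newer OpenCV (3.x and later ship a built-in connectedComponentsWithStats), a minimal sketch of the labeling step could look like this; the frame-to-frame matching is only indicated in a comment.
#include <opencv2/opencv.hpp>

// Labels the blobs in a binary image and reports area and centroid per blob.
void labelObjects(const cv::Mat& thresholded) {
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(thresholded, labels, stats, centroids);
    for (int i = 1; i < n; i++) {                      // label 0 is the background
        int area  = stats.at<int>(i, cv::CC_STAT_AREA);
        double cx = centroids.at<double>(i, 0);
        double cy = centroids.at<double>(i, 1);
        // Match (cx, cy) against the centroids found in the previous frame
        // (e.g. by nearest distance) to track each labelled object over time.
        (void)area; (void)cx; (void)cy;                // placeholders for your tracking logic
    }
}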
I am a newbie to OpenCV. I have installed the OpenCV library on an Ubuntu system, compiled it, and am trying to look into some image/video processing apps in OpenCV to understand more.
I am interested to know whether the OpenCV library has any algorithm/class for flicker removal in captured videos. If yes, what documentation or code should I look deeper into?
If openCV does not have it, are there any standard implementations in some other Video processing library/SDK/Matlab,.. which provide algorithms for flicker removal from video sequences?
Any pointers would be useful
Thank you.
-AD.
I don't know any standard way to deflicker a video.
But VirtualDub is video processing software which has a filter for deflickering video. You can find its filter source and documentation (probably including an algorithm description) here.
I wrote my own deflicker C++ function. Here it is. You can cut and paste this code as is - no headers needed other than the usual OpenCV ones.
Mat deflicker(Mat, int);
Mat prevdeflicker;

// Deflicker: compares each pixel of the frame to the previously stored frame and
// throttles small changes in pixels (flicker).
Mat deflicker(Mat Mat1, int strengthcutoff = 20) {
    if (prevdeflicker.rows) { // check if we stored a previous frame; if not, there's nothing we can do - just store and exit
        int i, j;
        uchar* p;
        uchar* prevp;
        for (i = 0; i < Mat1.rows; ++i) {
            p = Mat1.ptr<uchar>(i);
            prevp = prevdeflicker.ptr<uchar>(i);
            for (j = 0; j < Mat1.cols; ++j) {
                Scalar previntensity = prevp[j];
                Scalar intensity = p[j];
                int strength = abs(intensity.val[0] - previntensity.val[0]);
                if (strength < strengthcutoff) {
                    // The strength of the stimulus must be greater than a certain point,
                    // else we do not allow the change.
                    // A value of 25 works well for medium+ light; anything higher creates
                    // too much blur around moving objects.
                    // In low light, however, this makes it worse, since low light seems to
                    // increase contrast in flicker - some flickers go from 0 to 255 and back. :(
                    // I need a way to track large group movements vs small pixel changes, and
                    // only filter out the small pixel stuff. Maybe blur first?
                    if (intensity.val[0] > previntensity.val[0]) {
                        // Use the previous frame's value, changed by +1 - slow enough not to be noticeable flicker.
                        p[j] = previntensity.val[0] + 1;
                    } else {
                        p[j] = previntensity.val[0] - 1;
                    }
                }
            }
        } // end for
    }
    prevdeflicker = Mat1.clone(); // clone the current frame as the old one
    return Mat1;
}
Call it as: src_grey = deflicker(src_grey). It needs a loop and a greyscale image, like so:
for(;;){
cap >> frame; // get a new frame from camera
cvtColor( frame, src_grey, CV_RGB2GRAY ); //convert to greyscale - simplifies everything
src_grey = deflicker(src_grey); // this is the function call
imshow("grey video", src_grey);
if(waitKey(30) >= 0) break;
}
I'm currently working on an XNA game prototype. I'm trying to achieve an isometric view of the game world (or is it orthographic?? I'm not sure which is the right term for this projection - see pictures).
The world should be a tile-based world made of cubic tiles (e.g. similar to Minecraft's world), and I'm trying to render it in 2D using sprites.
So I have a sprite sheet with the top face of the cube, the front face and the side (visible side) face. I draw the tiles using 3 separate calls to drawSprite, one for the top, one for the side, one for the front, using a source rectangle to pick the face I want to draw and a destination rectangle to set the position on the screen according to a formula to convert from 3D world coordinates to isometric (orthographic?).
(sample sprite image)
This works well as long as I only draw the faces, but if I try to draw the fine edges of each block (as per a tile grid) I get a seemingly random rendering pattern in which some lines are overwritten by the face itself and some are not.
Please note that for my world representation, X is left to right, Y is inside screen to outside screen, and Z is up to down.
In this example I'm working only with top face-edges. Here is what I get (picture):
I don't understand why some of the lines are shown and some are not.
The rendering code I use is (note in this example I'm only drawing the topmost layers in each dimension):
/// <summary>
/// Draws the world
/// </summary>
/// <param name="spriteBatch"></param>
public void draw(SpriteBatch spriteBatch)
{
Texture2D tex = null;
// DRAW TILES
for (int z = numBlocks - 1; z >= 0; z--)
{
for (int y = 0; y < numBlocks; y++)
{
for (int x = numBlocks - 1; x >=0 ; x--)
{
myTextures.TryGetValue(myBlockManager.getBlockAt(x, y, z), out tex);
if (tex != null)
{
// TOP FACE
if (z == 0)
{
drawTop(spriteBatch, x, y, z, tex);
drawTop(spriteBatch, x, y, z, outlineTexture);
}
// FRONT FACE
if(y == numBlocks -1)
drawFront(spriteBatch, x, y, z, tex);
// SIDE FACE
if(x == 0)
drawSide(spriteBatch, x, y, z, tex);
}
}
}
}
}
private void drawTop(SpriteBatch spriteBatch, int x, int y, int z, Texture2D tex)
{
int pX = OffsetX + (int)(x * TEXTURE_TOP_X_OFFRIGHT + y * TEXTURE_SIDE_X);
int pY = OffsetY + (int)(y * TEXTURE_TOP_Y + z * TEXTURE_FRONT_Y);
topDestRect.X = pX;
topDestRect.Y = pY;
spriteBatch.Draw(tex, topDestRect, TEXTURE_TOP_RECT, Color.White);
}
I tried using a different approach, creating a second 3-tier nested for loop after the first one, so I keep the top face drawing in the first loop and the edge highlight in the second loop (I know this is inefficient, and I should probably avoid having a method call for each tile, but I'm just trying to get it working for now).
The results are somewhat better but still not working as expected; top rows are missing, see picture:
Any idea of why I'm having this problem? In the first approach it might be a sort of z-fighting, but I'm drawing sprites in a precise order so shouldn't they overwrite what's already there?
Thanks everyone
Whoa, sorry guys I'm an idiot :) I started the batch with SpriteBatch.begin(SpriteSortMode.BackToFront) but I didn't use any z-value in the draw.
I should have used SpriteSortMode.Deferred! It's now working fine. Thanks everyone!
Try tweaking the sizes of your source and destination rectangles by 1 or 2 pixels. I have a sneaking suspicion this has something to do with the way these rectangles are handled as sort of 'outlines' of the area to be rendered and a sort of off-by-one problem. This is not expert advice, just a fellow coder's intuition.
Looks like a sub pixel precision or scaling issue. Also try to ensure your texture/tile width/height is a power of 2 (32, 64, 128, etc.) as that could make the effect less bad as well. It's really hard to tell just from those pictures.
I don't know how/if you scale everything, but you should try to avoid rounding wherever possible (especially inside your drawTop() method). Every time you round some position/coordinate chances are good you might increase the error/random offsets. Try to use double (or better: float) coordinates instead of integer.
I would like to extract the most used colors inside an image, or at least the primary tones
Could you recommend how I can start with this task, or point me to some similar code? I have been looking for it but with no success.
You can get very good results using an Octree Color Quantization algorithm. Other quantization algorithms can be found on Wikipedia.
I agree with the comments - a programming solution would definitely need more information. But till then, assuming you'll obtain the RGB values of each pixel in your image, you should consider the HSV colorspace where the Hue can be said to represent the "tone" of each pixel. You can then use a histogram to identify the most used tones in your image.
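As a rough illustration of that histogram idea in C++/OpenCV (the input file name and the 30-bin choice are arbitrary assumptions):
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat bgr = cv::imread("input.jpg");             // hypothetical input file
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    int histSize = 30;                                  // 30 hue bins (arbitrary choice)
    float range[] = {0, 180};                           // OpenCV stores hue as degrees/2 (0-179)
    const float* ranges[] = {range};
    int channels[] = {0};                                // hue channel only
    cv::Mat hist;
    cv::calcHist(&hsv, 1, channels, cv::Mat(), hist, 1, &histSize, ranges);
    double maxVal;
    int maxIdx[2];
    cv::minMaxIdx(hist, nullptr, &maxVal, nullptr, maxIdx);
    std::cout << "dominant hue bin: " << maxIdx[0]
              << " (hue around " << maxIdx[0] * 12 << " degrees)\n";
    return 0;
}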
Well, I assume you can access each pixel's RGB color. There are two ways you can do this, depending on what you want.
First, you may simply compute the sum of all pixels' R, G and B values, like this.
Pseudo-code:
int Red = 0;
int Green = 0;
int Blue = 0;
foreach (Pixels as aPixel) {
Red += aPixel.getRed();
Green += aPixel.getGreen();
Blue += aPixel.getBlue();
}
Then see which sum is largest.
This only tells you whether the picture is more red, green or blue overall.
Another way will also give you statistics for combined colors (like orange), by simply creating a histogram of each RGB combination.
Pseudo-code:
Map ColorCounts = new();
foreach (Pixels as aPixel) {
const aRGB = aPixel.getRGB();
var aCount = ColorCounts.get(aRGB);
aCount++;
ColorCounts.put(aRGB, aCount);
}
Then see which one has more count.
You may also want to reduce the color resolution, as regular RGB coloring gives you up to 16.7 million colors.
This can be done easily by mapping the RGB values to ranges of colors. For example, let's say RGB has 8 steps instead of 256.
Pseudo-code:
function Reduce(Color) {
return (Color/32)*32; // 32 is 256/8 as for 8 ranges.
}
function ReduceRGB(RGB) {
return new RGB(Reduce(RGB.getRed()), Reduce(RGB.getGreen()), Reduce(RGB.getBlue()));
}
Map ColorCounts = new();
foreach (Pixels as aPixel) {
const aRGB = ReduceRGB(aPixel.getRGB());
var aCount = ColorCounts.get(aRGB);
aCount++;
ColorCounts.put(aRGB, aCount);
}
Then you can see which range has the highest count.
I hope these techniques make sense to you.
For a hobby project I'm going to build a program that when given an image bitmap will create a cross-stitch pattern as a PDF. I'll be using Cocoa/Objective C on a Mac.
The source bitmap will typically be a 24bpp image, but of the millions of colours available, only a few exist as cross-stitch threads. Threads come in various types. DMC is the most widely available, and almost their entire range is available as RGB values from various web sites. Here's one, for instance.
DMC# Name R G B
----- ------------------ --- --- ---
blanc White 255 255 255
208 Lavender - vy dk 148 91 128
209 Lavender - dk 206 148 186
210 Lavender - md 236 207 225
211 Lavender - lt 243 218 228
...etc...
My first problem, as I see it, is from a starting point of the RGB from a pixel in the image choosing the nearest colour available from the DMC set. What's the best way of finding the nearest DMC colour mathematically, and ensuring that it's a close fit as a colour too?
Although I'll be using Cocoa, feel free to use pseudo-code (or even Java!) in any code you post.
Use the LAB color space and find the color with the nearest Euclidean distance. Doing this in the RGB color space will yield counter-intuitive results. (Or use the HSL color space.)
So just iterate over each pixel and find the color with the closest distance within the color space you choose. Note that the distance must be computed circularly for some color spaces (e.g. those employing hue).
(Most color quantization revolves around actually choosing a palette, but that has already been taken care of in your case, so you can't use the more popular quantization techniques.)
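As a rough illustration of the nearest-Lab-distance idea (plain C++ rather than Cocoa, so treat it as portable pseudo-code; the sRGB-to-Lab constants are the usual D65 ones):
#include <cmath>
#include <limits>
#include <vector>

struct Lab { double L, a, b; };

static double pivotRgb(double c) {                 // undo sRGB gamma
    return c > 0.04045 ? std::pow((c + 0.055) / 1.055, 2.4) : c / 12.92;
}
static double pivotXyz(double t) {                 // the Lab helper function f(t)
    return t > 0.008856 ? std::cbrt(t) : 7.787 * t + 16.0 / 116.0;
}

Lab rgbToLab(int r8, int g8, int b8) {
    double r = pivotRgb(r8 / 255.0), g = pivotRgb(g8 / 255.0), b = pivotRgb(b8 / 255.0);
    // linear RGB -> XYZ (sRGB matrix, D65 white point)
    double X = 0.4124 * r + 0.3576 * g + 0.1805 * b;
    double Y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
    double Z = 0.0193 * r + 0.1192 * g + 0.9505 * b;
    double fx = pivotXyz(X / 0.95047), fy = pivotXyz(Y / 1.0), fz = pivotXyz(Z / 1.08883);
    return { 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz) };
}

// Returns the index of the DMC colour (packed 0xRRGGBB values) closest to the pixel.
int nearestThread(int pixelRgb, const std::vector<int>& dmcRgb) {
    Lab p = rgbToLab((pixelRgb >> 16) & 0xFF, (pixelRgb >> 8) & 0xFF, pixelRgb & 0xFF);
    int best = 0;
    double bestDist = std::numeric_limits<double>::max();
    for (size_t i = 0; i < dmcRgb.size(); ++i) {
        Lab q = rgbToLab((dmcRgb[i] >> 16) & 0xFF, (dmcRgb[i] >> 8) & 0xFF, dmcRgb[i] & 0xFF);
        double d = std::hypot(std::hypot(p.L - q.L, p.a - q.a), p.b - q.b);
        if (d < bestDist) { bestDist = d; best = static_cast<int>(i); }
    }
    return best;
}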
Also, check out this question.
To find the HSB hue in Cocoa, it looks like you can use the getHue method declared in NSColor.h.
However, if you just convert an image to a cross-stitch design using this technique, it will be very hard to actually stitch it. It will be full of single-pixel color fields, which sort of defeats the purpose of cross-stitching.
This is called color quantization, and there are many algorithms available.
One very basic approach is to just treat RGB colors as points in space and use plain old Euclidean distance between colors to figure out how "close" they are. This has drawbacks, since human eyes have different sensitivity at different places in this space, so such a distance would not correspond well to how humans perceive the colors. You can use various weighting schemes to improve that situation.
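For illustration, one commonly cited weighting (a "redmean"-style approximation) looks roughly like this; treat the exact coefficients as something to tune rather than a standard.
#include <cmath>

// Weighted RGB distance; heavier weight on green, red/blue weights shifted by the mean red level.
double weightedRgbDistance(int r1, int g1, int b1, int r2, int g2, int b2) {
    double rMean = (r1 + r2) / 2.0;
    double dr = r1 - r2, dg = g1 - g2, db = b1 - b2;
    return std::sqrt((2.0 + rMean / 256.0) * dr * dr
                   + 4.0 * dg * dg
                   + (2.0 + (255.0 - rMean) / 256.0) * db * db);
}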
Interesting... :)
You would not only want to identify the nearest colors, you would also want to reduce the number of colors used. You don't want to end up with a stitching pattern that uses hundreds of different colors...
I put together some code that does this on a basic level. (Sorry that it's in C#, I hope that it can be somewhat useful anyway.)
There is some further tweaking that needs to be done before the method works well, of course. The GetDistance method weights the importance of hue, saturation and brightness against each other, finding the best balance between those is of course important in order to find the color that looks closest.
There is also a lot that can be done with the method of reducing the palette. In the example I just picked the most used colors, but you probably want to weight in how alike the colors are in the palette. This can be done by picking the most used color, reduce the count for the remaining colors in the list depending on the distance to the picked color, and then resort the list.
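To make that reduction step concrete, here is a hedged sketch in C++ (the rest of this answer is in C#; the Entry struct, the dist callback and the down-weighting formula are all illustrative assumptions):
#include <algorithm>
#include <vector>

struct Entry { int color; double count; };          // colour is e.g. a packed RGB value

std::vector<int> reducePalette(std::vector<Entry> usage,
                               double (*dist)(int, int),   // distance between two colours
                               int wanted) {
    std::vector<int> picked;
    while ((int)picked.size() < wanted && !usage.empty()) {
        // Take the currently most-used colour...
        auto best = std::max_element(usage.begin(), usage.end(),
            [](const Entry& a, const Entry& b) { return a.count < b.count; });
        int chosen = best->color;
        picked.push_back(chosen);
        usage.erase(best);
        // ...then down-weight colours similar to it, so the next pick adds variety
        // instead of a near-duplicate. The weighting formula is a guess to tune.
        for (Entry& e : usage) {
            double d = dist(e.color, chosen);        // 0 = identical
            e.count *= d / (d + 1.0);
        }
    }
    return picked;
}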
The Hsl class that holds a DMC color, can calculate the distance to another color, and find the nearest color in a list of colors:
public class Hsl {
public string DmcNumber { get; private set; }
public Color Color { get; private set; }
public float Hue { get; private set; }
public float Saturation { get; private set; }
public float Brightness { get; private set; }
public int Count { get; set; }
public Hsl(Color c) {
DmcNumber = "unknown";
Color = c;
Hue = c.GetHue();
Saturation = c.GetSaturation();
Brightness = c.GetBrightness();
Count = 0;
}
public Hsl(string dmc, int r, int g, int b)
: this(Color.FromArgb(r, g, b))
{
DmcNumber = dmc;
}
private static float AngleDifference(float a1, float a2) {
float a = Math.Abs(a1 - a2);
if (a > 180f) {
a = 360f - a;
}
return a / 180f;
}
public float GetDistance(Hsl other) {
return
AngleDifference(Hue, other.Hue) * 3.0f +
Math.Abs(Saturation - other.Saturation) +
Math.Abs(Brightness - other.Brightness) * 4.0f;
}
public Hsl GetNearest(IEnumerable<Hsl> dmcColors) {
Hsl nearest = null;
float nearestDistance = float.MaxValue;
foreach (Hsl dmc in dmcColors) {
float distance = GetDistance(dmc);
if (distance < nearestDistance) {
nearestDistance = distance;
nearest = dmc;
}
}
return nearest;
}
}
This code sets up a (heavily reduced) list of DMC colors, loads an image, counts the colors, reduces the palette and converts the image. You would of course also want to save the information from the reduced palette somewhere.
Hsl[] dmcColors = {
new Hsl("blanc", 255, 255, 255),
new Hsl("310", 0, 0, 0),
new Hsl("317", 167, 139, 136),
new Hsl("318", 197, 198, 190),
new Hsl("322", 81, 109, 135),
new Hsl("336", 36, 73, 103),
new Hsl("413", 109, 95, 95),
new Hsl("414", 167, 139, 136),
new Hsl("415", 221, 221, 218),
new Hsl("451", 179, 151, 143),
new Hsl("452", 210, 185, 175),
new Hsl("453", 235, 207, 185),
new Hsl("503", 195, 206, 183),
new Hsl("504", 206, 221, 193),
new Hsl("535", 85, 85, 89)
};
Bitmap image = (Bitmap)Image.FromFile(@"d:\temp\pattern.jpg");
// count colors used
List<Hsl> usage = new List<Hsl>();
for (int y = 0; y < image.Height; y++) {
for (int x = 0; x < image.Width; x++) {
Hsl color = new Hsl(image.GetPixel(x, y));
Hsl nearest = color.GetNearest(dmcColors);
int index = usage.FindIndex(h => h.Color.Equals(nearest.Color));
if (index != -1) {
usage[index].Count++;
} else {
nearest.Count = 1;
usage.Add(nearest);
}
}
}
// reduce number of colors by picking the most used
Hsl[] reduced = usage.OrderBy(c => -c.Count).Take(5).ToArray();
// convert image
for (int y = 0; y < image.Height; y++) {
for (int x = 0; x < image.Width; x++) {
Hsl color = new Hsl(image.GetPixel(x, y));
Hsl nearest = color.GetNearest(reduced);
image.SetPixel(x, y, nearest.Color);
}
}
image.Save(#"d:\temp\pattern.png", System.Drawing.Imaging.ImageFormat.Png);
Get the source for the ppmquant application from the netpbm set of utilities.
Others have pointed out various techniques for color quantization. It's possible to use techniques like Markov Random Fields to try to penalize the system for switching thread colors at neighboring pixel locations. There are some generic multi-label MRF libraries out there including Boykov's.
To use one of these, the data elements would be the input colors, the labels would be the set of thread colors, the data terms could be something like the Euclidean distance in LAB space suggested by bzlm, and the neighborhood terms would penalize for switching thread colors.
Depending on how much the correctness of your color operations matters, remember to take color spaces into account. While I have studied this somewhat, due to my photography hobby, I'm still a bit confused about everything.
But, as previously mentioned, use LAB as much as possible, because (afaik) it's color space agnostic, while all other methods (RGB/HSL/CMYK) mean nothing (in theory) without a defined color space.
RGB, for example, is just three percentage values (0-255 => 0-100%, with 8-bit color depth). So, if you have an RGB triplet of (0,255,0), it translates to "only green, and as much of it as possible". The question is then "how green is green?". This is the question that a color space answers - sRGB 100% green is not as green as AdobeRGB 100% green. It's not even the same hue!
Sorry if this went a bit off-topic.