Making a "2048 solitaire game" with F#: Creating a fsi file, fs file and fsx file - f#

In your solution, you are to represent a board with its pieces as a list of pieces, where each piece
has a color and a position. This is captured by the following type abbreviations:
type pos = int * int // a 2-dimensional vector in board coordinates (not pixels)
type value = Red | Green | Blue | Yellow | Black // piece values
type piece = value * pos
type state = piece list // the board is a set of randomly organized pieces
In the following, the first coordinate in pos will be thought of as an up-down axis, also called the row, and
the second as a left-right axis, also called the column, with (0,0) being the top-left.
Make a library consisting of a signature and an implementation file. The library must contain
the following functions:
// convert a 2048-value v to a canvas color, e.g.,
// > fromValue Green;;
// val it: color = { r = 0uy
//                   g = 255uy
//                   b = 0uy
//                   a = 255uy }
val fromValue: v: value -> Canvas.color
// give the 2048-value which is the next in order from c, e.g.,
// > nextColor Blue;;
// val it: value = Yellow
// > nextColor Black;;
// val it: value = Black
val nextColor: c: value -> value
// return the list of pieces in column k on board s, e.g.,
// > filter 0 [(Blue, (1, 0)); (Red, (0, 0))];;
// val it: state = [(Blue, (1, 0)); (Red, (0, 0))]
// > filter 1 [(Blue, (1, 0)); (Red, (0, 0))];;
// val it: state = []
val filter: k: int -> s: state -> state
// shift all pieces on the board s upwards (towards zero on
// the first coordinate), e.g.,
// > shiftUp [(Blue, (1, 0)); (Red, (2, 0)); (Black, (1, 1))];;
// val it: state = [(Blue, (0, 0)); (Red, (1, 0)); (Black, (0, 1))]
val shiftUp: s: state -> state
// flip the board s such that all piece positions change as
// (i,j) -> (2-i,j), e.g.,
// > flipUD [(Blue, (1, 0)); (Red, (2, 0))];;
// val it: state = [(Blue, (1, 0)); (Red, (0, 0))]
val flipUD: s: state -> state
// transpose the pieces on the board s such that all piece positions
// change as (i,j) -> (j,i), e.g.,
// > transpose [(Blue, (1, 0)); (Red, (2, 0))];;
// val it: state = [(Blue, (0, 1)); (Red, (0, 2))]
val transpose: s: state -> state
// find the list of empty positions on the board s, e.g.,
// > empty [(Blue, (1, 0)); (Red, (2, 0))];;
// val it: pos list = [(0, 0); (0, 1); (0, 2); (1, 1); (1, 2); (2, 1); (2, 2)]
val empty: s: state -> pos list
// randomly place a new piece of color c on an empty position on
// the board s, e.g.,
// > addRandom Red [(Blue, (1, 0)); (Red, (2, 0))];;
// val it: state option = Some [(Red, (0, 2)); (Blue, (1, 0)); (Red, (2, 0))]
val addRandom: c: value -> s: state -> state option
My issue is trying to implement the first function, "fromValue", in my .fs file.
I have added the Canvas package to the project through the terminal (dotnet add package DIKU.Canvas) and referenced it successfully:
#r "nuget:diku.canvas, 1.0.1"
This is what I have tried:
module Canvas

type pos = int * int // a 2-dimensional vector in board coordinates (not pixels)
type value = Red | Green | Blue | Yellow | Black // piece values
type piece = value * pos
type state = piece list // the board is a set of randomly organized pieces

let fromValue (v: value) : Canvas.color =
    let Red = Canvas.red
    let Green = Canvas.green
    let Blue = Canvas.blue
    let Yellow = Canvas.yellow
    let Black = Canvas.black

As Brain says, you could use pattern matching here.
Use this as a starting point:
let fromValue (v: value) : Canvas.color =
    match v with
    | Red -> Canvas.red
    | Green -> Canvas.green
    | Blue -> Canvas.blue
    | Yellow -> Canvas.yellow
    | Black -> Canvas.black

Related

How to convert colors?

I'd like to do some kind of special color comparison.
During my research I found out that the comparison should not be done in the RGB space, because models like HSL & HSV are designed to "more closely align with the way human vision perceives color-making attributes" (quote: Wikipedia).
So I need a way to convert the different color systems into each other.
One of the most important conversions for my purposes would be HEX to HSL (using Swift).
Because I'm a bloody beginner, this code is all that I've got so far:
// conversion HEX to HSL
HexToHSL("#F23CFF") // HSL should be "HSL: 296° 100% 62%"
func HexToHSL(_ hex: String) {
    let rgb = HexToRgb(hex)
    let r = rgb[0],
        g = rgb[1],
        b = rgb[2],
        a = rgb[3]
}
func RgbToHSL(r: Int, g: Int, b: Int) -> [Int] {
    let r = r/255, g = g/255, b = b/255;
    let max = [r, g, b].max()!, min = [r, g, b].min()!;
    let (h, s, l) = Double(max + min)*0.5; // "Expression type 'Double' is ambiguous without more context"
    if (max == min) {
        h = s = 0;
    } else {
        let d = max - min;
        s = l > 0.5 ? d / (2 - max - min) : d / (max + min);
        h /= 6;
    }
    return [ h, s, l ];
}
func HexToRgb(_ hex: String) -> [Int] {
    let hex = hex.substring(fromIndex: 1)
    var rgbValue: UInt32 = 0
    Scanner(string: hex).scanHexInt32(&rgbValue)
    let red = Int((rgbValue & 0xFF0000) >> 16),
        green = Int((rgbValue & 0x00FF00) >> 8),
        blue = Int(rgbValue & 0x0000FF),
        alpha = Int(255.0)
    return [red, green, blue, alpha]
}
Any help with fixing the color conversion from HEX to HSL would be very appreciated, thanks in advance!
Note: There's also a JavaScript sample for some kind of color conversion. Maybe it's helpful :)
Edit: I have fixed the code for rgb to hsl like this:
func RgbToHSL(_ rgb: [Int]) -> [Double] {
    let r = Double(rgb[0])/255, g = Double(rgb[1])/255, b = Double(rgb[2])/255;
    let max = [r, g, b].max()!, min = [r, g, b].min()!;
    var h = Double(max + min)*0.5,
        s = Double(max + min)*0.5,
        l = Double(max + min)*0.5;
    if (max == min) {
        // achromatic: hue and saturation are zero, lightness stays (max + min)/2
        h = 0
        s = 0
    } else {
        let d = max - min;
        s = l > 0.5 ? d / (2 - max - min) : d / (max + min);
        switch (max) {
        case r: h = (g - b) / d + (g < b ? 6 : 0); break;
        case g: h = (b - r) / d + 2; break;
        case b: h = (r - g) / d + 4; break;
        default: break;
        }
        h /= 6;
    }
    return [ h, s, l ];
}
... but the result for rgb = [242, 60, 255] will be [0.8222222222222223, 1.0, 0.61764705882352944]. At first that doesn't look right, but these values are fractions of the full scale: 0.8222 × 360 ≈ 296°, 1.0 = 100% and 0.6176 ≈ 62%, which is exactly the expected "296° 100% 62%".
In order to compare colours, and thus compute colour differences, you need to use a perceptually uniform colourspace.
HSL and HSV are actually very poor colourspaces for this; they should not be used for proper colorimetric computations, because their Lightness and Value axes are not actual perceptual representations of luminance, contrary to colourspaces such as CIE L*a*b* and CIE L*u*v*.
There are multiple ways to compute colour difference in colour science; the simplest, which assumes you are using a perceptually uniform colourspace, is Euclidean distance.
This is what DeltaE CIE 1976 does, using the CIE L*a*b* colourspace. The CIE noticed that some colours with low DeltaE values actually appeared quite different; this was a side effect of the CIE L*a*b* colourspace not being perceptually uniform enough. From there, research produced many new colour difference formulas and new perceptually uniform colourspaces.
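Since DeltaE CIE 1976 is nothing more than the Euclidean distance between two colours expressed in CIE L*a*b*, it is a very small function. A minimal sketch (in C++; the Lab struct and the assumption that the inputs are already converted to L*a*b* are mine, not from the original answer):

#include <cmath>

struct Lab { double L, a, b; }; // CIE L*a*b* coordinates

// DeltaE CIE 1976: plain Euclidean distance in CIE L*a*b*.
double deltaE1976(const Lab& c1, const Lab& c2)
{
    const double dL = c1.L - c2.L;
    const double da = c1.a - c2.a;
    const double db = c1.b - c2.b;
    return std::sqrt(dL * dL + da * da + db * db);
}

The later formulas and colourspaces listed below refine exactly this idea.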
Here is a non-exhaustive list from oldest to most recent of notable colour difference formulas and perceptually uniform colourspaces, notice the implementation complexity almost follows the list order:
DeltaE CIE 1976
DeltaE CMC
DeltaE CIE 1994
DIN99
IPT
DeltaE CIE 2000
CIECAM02 & CAM02-UCS
CAM16 & CAM16-UCS
ICTCP
JzAzBz
I would suggest looking at something like ICTCP or JzAzBz, which offer good performance and are not super complex to implement, or at the very least using CIE L*a*b* with Euclidean distance; but avoid HSL and HSV.
We have reference implementations for everything mentioned here in Colour.

How can I get information from circular ROI using dm script?

After making a circular ROI in an image, how can I get information (average, standard deviation, variance) from that image region using a script?
Can I link the position in the circular ROI with the original image?
This task is unfortunately not as straightforward and easy as one would hope.
While scripting supports a convenient shortcut to restrict image operations to rectangular ROIs (using the img[] notation), there is nothing like that for irregular ROIs.
In such a case, one has to manually create a binary mask of the ROI and perform the wanted operations with it. The example script at the bottom of this post shows how the average value of an irregular ROI may be computed; the same masked sums also give the variance (sum(mask*img*img)/n minus the squared mean) and from that the standard deviation.
CreateImageWithROI() Creates a test image with two ROIs on it
GetFirstIrregularROIOfImage() just returns the first found, irregular ROI of an image
GetROIMean() is the actual example
The command ROIAddToMask() is used to create the mask. Note that there is also a similar command which would perform the action with all ROIs of an image display at once: ImageDisplayAccumulateROIsToMask()
So far, so good.
However, it turns out that the newly introduced circular ROIs do not yet support the mask-creation commands correctly (tested with GMS 3.1); instead, they always use the bounding rectangle of the ROI.
It is therefore necessary to go even one step further and read the ROI's coordinates to create a mask from them manually: get the ROI's bounding box and create a mask using an icol and irow expression for an ellipse. In the example below:
GetFirstOvalROIOfImage() just returns the first found, oval ROI of an image
MyAddOvalROIToMask() is the manual mask creation for oval ROIs
Example code:
image CreateImageWithROI()
{
    // Create and show image
    number sx = 256, sy = 256
    image img := RealImage( "Image", 4, sx, sy )
    img = sin( 0.1 * iradius ) * cos( 7 * itheta )
    img.ShowImage()

    // Create an irregular, closed ROI
    ROI myIrRoi = NewROI()
    myIrRoi.ROIAddVertex( 0.3 * sx, 0.1 * sy )
    myIrRoi.ROIAddVertex( 0.7 * sx, 0.2 * sy )
    myIrRoi.ROIAddVertex( 0.5 * sx, 0.6 * sy )
    myIrRoi.ROIAddVertex( 0.1 * sx, 0.8 * sy )
    myIrRoi.ROISetIsClosed(1)
    myIrRoi.ROISetVolatile(0)

    // Create an oval ROI
    ROI myOvalROI = NewROI()
    myOvalROI.ROISetOval( 0.7 * sy, 0.7 * sx, 0.9 * sy, 0.8 * sx )
    myOvalROI.ROISetVolatile(0)

    // Add ROIs
    imageDisplay disp = img.ImageGetImageDisplay( 0 )
    disp.ImageDisplayAddROI( myIrRoi )
    disp.ImageDisplayAddROI( myOvalROI )
    return img
}
ROI GetFirstIrregularROIOfImage( image img )
{
    if ( img.ImageIsValid() )
    {
        if ( 0 != img.ImageCountImageDisplays() )
        {
            imageDisplay disp = img.ImageGetImageDisplay( 0 )
            number nRois = disp.ImageDisplayCountROIs()
            for ( number i = 0; i < nRois; i++ )
            {
                ROI testROI = disp.ImageDisplayGetRoi( i )
                number isIrregularClosed = 1
                isIrregularClosed *= testROI.ROIIsClosed();
                isIrregularClosed *= !testROI.ROIIsOval();
                isIrregularClosed *= !testROI.ROIIsRectangle();
                isIrregularClosed *= ( 2 < testROI.ROICountVertices() );
                if ( isIrregularClosed )
                    return testROI
            }
        }
    }
    Throw( "No irregular ROI found" )
}
ROI GetFirstOvalROIOfImage( image img )
{
    if ( img.ImageIsValid() )
    {
        if ( 0 != img.ImageCountImageDisplays() )
        {
            imageDisplay disp = img.ImageGetImageDisplay( 0 )
            number nRois = disp.ImageDisplayCountROIs()
            for ( number i = 0; i < nRois; i++ )
            {
                ROI testROI = disp.ImageDisplayGetRoi( i )
                if ( testROI.ROIIsOval() )
                    return testROI
            }
        }
    }
    Throw( "No oval ROI found" )
}
void MyAddOvalROIToMask( image img, ROI ovalROI )
{
    number top, left, bottom, right
    ovalROI.ROIGetOval( top, left, bottom, right )
    number sx = ( right - left )
    number sy = ( bottom - top )
    number cx = sx/2 // used as both center x coordinate and x radius!
    number cy = sy/2 // used as both center y coordinate and y radius!

    // Create mask of just the rect area
    image maskCut := RealImage( "", 4, sx, sy )
    maskCut = ( ((cx-icol)/cx)**2 + ((cy-irow)/cy)**2 <= 1 ) ? 1 : 0

    // Apply mask to image
    img[top, left, bottom, right] = maskCut
}
number GetROIMean( image img, ROI theRoi )
{
    if ( !img.ImageIsValid() ) Throw( "Invalid image in GetROIMean()" )
    if ( !theRoi.ROIIsValid() ) Throw( "Invalid roi in GetROIMean()" )

    // Create a binary mask of "img" size using the ROI's coordinates
    image mask = img * 0 // image of same size as "img" with 0 values
    number sx, sy
    img.GetSize( sx, sy )

    // Oval ROIs are not yet supported correctly by the command,
    // hence check and compute the mask manually.
    if ( theROI.ROIIsOval() )
        MyAddOvalROIToMask( mask, theROI )
    else
        theROI.ROIAddToMask( mask, 0, 0, sx, sy )

    if ( TwoButtonDialog( "Show mask?", "Yes", "No" ) )
        mask.ShowImage()

    // Compute the mean value as a sum over the masked points
    number maskedPoints = sum( mask )
    number maskedSum
    if ( 0 < maskedPoints )
        maskedSum = sum( mask * img ) / maskedPoints
    else
        maskedSum = sum( img )
    return maskedSum
}
Result( "\n Testing irregular and oval ROIs on image.\n" )
image testImg := CreateImageWithROI()
ROI testROIir = GetFirstIrregularROIOfImage( testImg )
number ROIirMean = GetROIMean( testImg, testROIir )
Result( "\n Mean value (irregular ROI): "+ ROIirMean )
ROI testROIoval = GetFirstOvalROIOfImage( testImg )
number ROIovalMean = GetROIMean( testImg, testROIoval )
Result( "\n Mean value (oval ROI) : "+ ROIovalMean )

Android bitmap operations using ndk

Currently I'm developing an Android application that involves some image processing. After some research I found that it is better to use the Android NDK for bitmap manipulation, for good performance. So, I found some basic examples like this one:
static void myFunction(AndroidBitmapInfo* info, void* pixels){
    int xx, yy, red, green, blue;
    uint32_t* line;
    for(yy = 0; yy < info->height; yy++){
        line = (uint32_t*)pixels;
        for(xx = 0; xx < info->width; xx++){
            // extract the RGB values from the pixel
            blue  = (int)((line[xx] & 0x00FF0000) >> 16);
            green = (int)((line[xx] & 0x0000FF00) >> 8);
            red   = (int) (line[xx] & 0x000000FF);
            // change the RGB values
            // set the new pixel back in
            line[xx] =
                ((blue << 16) & 0x00FF0000) |
                ((green << 8) & 0x0000FF00) |
                (red & 0x000000FF);
        }
        pixels = (char*)pixels + info->stride;
    }
}
I used this code and it works very well for basic operations, but I want to implement something more complex, like a filter, where I need to access the pixels above and below the current pixel. To be more specific, I'll give an example: for dilation and erosion operations we move through the pixels and verify whether the pixels to the north-west, north, north-east, west, east, south-west, south and south-east (for an 8-neighbour structuring element) are object pixels. What I need to know is how I can access the values of the north and south pixels using the above code.
I'm not very familiar with image processing in C (pointers etc.).
Thanks!
I've edited your function a little. Basically, to get a pixel's position in the array, the formula is:
position = y*width + x
static void myFunction(AndroidBitmapInfo* info, void* pixels){
    int xx, yy, red, green, blue;
    // note: indexing this way assumes rows are tightly packed
    // (stride == width * 4); see the stride note after this answer
    uint32_t* px = (uint32_t*)pixels;
    for(yy = 0; yy < info->height; yy++){
        for(xx = 0; xx < info->width; xx++){
            int position = yy*info->width + xx; // address of the pixel with coordinates (xx, yy) in 'px'
            // extract the RGB values from the pixel
            blue  = (int)((px[position] & 0x00FF0000) >> 16);
            green = (int)((px[position] & 0x0000FF00) >> 8);
            red   = (int) (px[position] & 0x000000FF);
            // change the RGB values
            // set the new pixel back in
            px[position] =
                ((blue << 16) & 0x00FF0000) |
                ((green << 8) & 0x0000FF00) |
                (red & 0x000000FF);
            // so the position of the south pixel is (yy+1)*info->width + xx
            // and the one of the north is (yy-1)*info->width + xx
            // the left one is yy*info->width + xx - 1
            // the right one is yy*info->width + xx + 1
        }
    }
}
Assuming you want to read/edit a pixel with coordinates (x, y), you must check that 0 <= y < height and 0 <= x < width; otherwise you may access non-existing pixels and get a memory access error.
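One caveat worth adding (an editorial note, not part of the original answer): position = y*width + x only holds when the bitmap's rows are tightly packed. AndroidBitmapInfo also reports a stride, the distance between row starts in bytes, which can be larger than width * 4; the question's original loop advanced row by row with it. A stride-safe helper, sketched in C:

#include <android/bitmap.h>
#include <stdint.h>

/* Row starts are info->stride bytes apart, which may exceed info->width * 4. */
static uint32_t* pixelAt(const AndroidBitmapInfo* info, void* pixels, int x, int y)
{
    return (uint32_t*)((char*)pixels + (size_t)y * info->stride) + x;
}

/* Example: read the north and south neighbours of (x, y), with the
   bounds checks described above. */
static void readNeighbours(const AndroidBitmapInfo* info, void* pixels, int x, int y)
{
    if (y > 0) {
        uint32_t north = *pixelAt(info, pixels, x, y - 1);
        (void)north; /* process the north pixel here */
    }
    if (y + 1 < (int)info->height) {
        uint32_t south = *pixelAt(info, pixels, x, y + 1);
        (void)south; /* process the south pixel here */
    }
}

With this helper the neighbour offsets from the answer still apply; they are just computed per row instead of into one flat array.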

Binary Image Orientation

I'm trying to find the orientation of a binary image (where orientation is defined to be the axis of least moment of inertia, i.e. least second moment of area). I'm using Dr. Horn's book (MIT) on Robot Vision which can be found here as reference.
Using OpenCV, here is my function, where a, b, and c are the second moments of area as found on page 15 of the pdf above (page 60 of the text):
Point3d findCenterAndOrientation(const Mat& src)
{
    Moments m = cv::moments(src, true);
    double cen_x = m.m10/m.m00; // centers are right
    double cen_y = m.m01/m.m00;

    double a = m.m20-m.m00*cen_x*cen_x;
    double b = 2*m.m11-m.m00*(cen_x*cen_x+cen_y*cen_y);
    double c = m.m02-m.m00*cen_y*cen_y;

    double theta = a==c?0:atan2(b, a-c)/2.0;
    return Point3d(cen_x, cen_y, theta);
}
OpenCV calculates the second moments around the origin (0,0) so I have to use the Parallel Axis Theorem to move the axis to the center of the shape, mr^2.
The center looks right when I call
Point3d p = findCenterAndOrientation(src);
rectangle(src, Point(p.x-1,p.y-1), Point(p.x+1, p.y+1), Scalar(0.25), 1);
But when I try to draw the axis with lowest moment of inertia, using this function, it looks completely wrong:
line(src, (Point(p.x,p.y)-Point(100*cos(p.z), 100*sin(p.z))), (Point(p.x, p.y)+Point(100*cos(p.z), 100*sin(p.z))), Scalar(0.5), 1);
Here are some examples of input and output:
(I'd expect it to be vertical)
(I'd expect it to be horizontal)
I worked with orientation some time back and coded the following. It returns the exact orientation of the object; largest_contour is the shape that is detected.
CvMoments moments1;
double M00, M01, M10;
cvMoments(largest_contour, &moments1);
M00 = cvGetSpatialMoment(&moments1, 0, 0);
M10 = cvGetSpatialMoment(&moments1, 1, 0);
M01 = cvGetSpatialMoment(&moments1, 0, 1);
posX_Yellow = (int)(M10/M00);
posY_Yellow = (int)(M01/M00);
double theta = 0.5 * atan(
    (2 * cvGetCentralMoment(&moments1, 1, 1)) /
    (cvGetCentralMoment(&moments1, 2, 0) - cvGetCentralMoment(&moments1, 0, 2)));
theta = (theta / PI) * 180;

// fit an ellipse (and draw it)
if (largest_contour->total >= 6) // can only do an ellipse fit if we have enough points
{
    CvBox2D box = cvFitEllipse2(largest_contour);
    if ((box.size.width < imgYellowThresh->width) && (box.size.height < imgYellowThresh->height))
    {
        cvEllipseBox(imgYellowThresh, box, CV_RGB(255, 255, 255), 3, 8, 0);
    }
}
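As an editorial side note, not from either poster: cv::moments already exposes the central moments as mu20, mu11 and mu02, which avoids the hand-rolled parallel-axis step in the question. In that step the mixed term should be b = 2*(m11 - m00*cen_x*cen_y); the b in the question is not of that form, which by itself would skew the axis. A sketch of the same function using the precomputed central moments:

#include <cmath>
#include <opencv2/imgproc.hpp>

// Orientation of the axis of least second moment of area, using
// OpenCV's precomputed central moments (mu20, mu11, mu02).
cv::Point3d findCenterAndOrientation(const cv::Mat& src)
{
    cv::Moments m = cv::moments(src, true);
    double cen_x = m.m10 / m.m00;
    double cen_y = m.m01 / m.m00;
    double a = m.mu20;
    double b = 2.0 * m.mu11; // the mixed central moment enters doubled
    double c = m.mu02;
    double theta = (a == c && b == 0) ? 0 : 0.5 * std::atan2(b, a - c);
    return cv::Point3d(cen_x, cen_y, theta);
}

Also remember that image coordinates have y pointing down, so an axis that looks flipped on screen may just be the sign convention of the angle.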

Algorithm for Hue/Saturation Adjustment Layer from Photoshop

Does anyone know how adjustment layers work in Photoshop? I need to generate a result image from a source image and the HSL values of a Hue/Saturation adjustment layer. Converting to RGB and then multiplying with the source color does not work.
Or is it possible to replace a Hue/Saturation adjustment layer with normal layers with appropriately set blending modes (Multiply, Screen, Hue, Saturation, Color, Luminosity, ...)?
If so, then how?
Thanks
I've reverse-engineered the computation for when the "Colorize" checkbox is checked. All of the code below is pseudo-code.
The inputs are:
hueRGB, which is an RGB color for HSV(photoshop_hue, 100, 100).ToRGB()
saturation, which is photoshop_saturation / 100.0 (i.e. 0..1)
lightness, which is photoshop_lightness / 100.0 (i.e. -1..1)
value, which is the pixel.ToHSV().Value, scaled into 0..1 range.
The method to colorize a single pixel:
color = blend2(rgb(128, 128, 128), hueRGB, saturation);
if (lightness <= -1)
    return black;
else if (lightness >= 1)
    return white;
else if (lightness >= 0)
    return blend3(black, color, white, 2 * (1 - lightness) * (value - 1) + 1)
else
    return blend3(black, color, white, 2 * (1 + lightness) * (value) - 1)
Where blend2 and blend3 are:
blend2(left, right, pos):
    return rgb(left.R * (1-pos) + right.R * pos, same for green, same for blue)
blend3(left, main, right, pos):
    if (pos < 0)
        return blend2(left, main, pos + 1)
    else if (pos > 0)
        return blend2(main, right, pos)
    else
        return main
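To make the pseudo-code concrete, here is a direct transcription into C++ (an illustrative sketch; the RGB struct and the 0..255 channel convention are my assumptions, not part of the original answer):

struct RGB { double r, g, b; }; // channels in 0..255

RGB blend2(const RGB& left, const RGB& right, double pos)
{
    return { left.r * (1 - pos) + right.r * pos,
             left.g * (1 - pos) + right.g * pos,
             left.b * (1 - pos) + right.b * pos };
}

RGB blend3(const RGB& left, const RGB& main, const RGB& right, double pos)
{
    if (pos < 0) return blend2(left, main, pos + 1);
    if (pos > 0) return blend2(main, right, pos);
    return main;
}

// hueRGB:     HSV(photoshop_hue, 100, 100) converted to RGB
// saturation: photoshop_saturation / 100 (0..1)
// lightness:  photoshop_lightness / 100 (-1..1)
// value:      the pixel's HSV value, scaled to 0..1
RGB colorize(const RGB& hueRGB, double saturation, double lightness, double value)
{
    const RGB black{ 0, 0, 0 };
    const RGB white{ 255, 255, 255 };
    const RGB color = blend2(RGB{ 128, 128, 128 }, hueRGB, saturation);
    if (lightness <= -1) return black;
    if (lightness >= 1) return white;
    if (lightness >= 0)
        return blend3(black, color, white, 2 * (1 - lightness) * (value - 1) + 1);
    return blend3(black, color, white, 2 * (1 + lightness) * value - 1);
}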
I have figured out how Lightness works.
The input parameter brightness b is in [0, 2]; the output is c (a color channel).
if (b < 1) c = b * c;
else       c = c + (b - 1) * (1 - c);
Some tests:
b = 0 >>> c = 0 // black
b = 1 >>> c = c // same color
b = 2 >>> c = 1 // white
However, if you choose some interval (e.g. Reds instead of Master), Lightness behaves completely differently, more like Saturation.
Photoshop, dunno. But the theory is usually: the RGB image is converted to HSL/HSV by the particular layer's internal methods; each pixel's HSL is then modified according to the specified parameters, and the result so obtained is converted back to RGB for display.
PaintShopPro7 used to split up the H space (assuming a range of 0..360) into discrete increments of 30° (IIRC), so if you bumped only the "yellows", only pixels whose H component was valued 45..75 would be considered for manipulation.
reds 345..15, oranges 15..45, yellows 45..75, yellowgreen 75..105, greens 105..135, etc.
if (h >= 45 && h < 75)
    s += s * yellow_percent;
There are alternative possibilities, such as applying a falloff filter, as in:
/* For h = 60 let m = 1, and linearly fall off to m = 0 at h = 75. */
m = 1 - abs(h - 60) / 15;
if (m < 0)
    m = 0;
s += s * yellow_percent * m;
Hello, I wrote a colorize shader and my equation is as follows:
inputRGB is the source image, which should be monochrome ((r + g + b) * 0.333)
colorRGB is your destination color
finalRGB is the result
Pseudo code:
finalRGB = inputRGB * (colorRGB + inputRGB * 0.5);
I think it's fast and efficient.
I translated @Roman Starkov's solution to Java, in case anyone needs it, but for some reason it did not work so well at first. Then I read a little and found that the solution is very simple; there are two things that have to be done:
When changing the hue or saturation, replace only the hue and saturation of the original image and keep the lightness as it was in the original image; this blend method is called the luminosity blend mode (section 10.2.4):
https://www.w3.org/TR/compositing-1/#backdrop
When changing the lightness in Photoshop, the slider indicates what percentage we need to add or subtract to/from the original lightness in order to get to white or black in HSL.
For example:
If the original pixel has lightness 0.7 and the lightness slider is set to 20,
we need 0.3 more lightness to get to 1,
so we add to the original pixel's lightness: 0.7 + 0.2 * 0.3;
this will be the new blended lightness value for the new pixel.
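That rule fits in a few lines; a small sketch (C++; the negative-slider branch, which mirrors the rule toward black, is my extrapolation, consistent with the c = b*c case of the lightness formula in the earlier answer):

// L in 0..1; slider in -1..1 (Photoshop slider value / 100).
double blendLightness(double L, double slider)
{
    if (slider >= 0)
        return L + slider * (1.0 - L); // move toward white
    else
        return L + slider * L;         // move toward black
}

For the example above, blendLightness(0.7, 0.2) returns 0.7 + 0.2 * 0.3 = 0.76.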
@Roman Starkov's solution, Java implementation:
//newHue, which is photoshop_hue (i.e. 0..360)
//newSaturation, which is photoshop_saturation / 100.0 (i.e. 0..1)
//newLightness, which is photoshop_lightness / 100.0 (i.e. -1..1)
//returns rgb int array of new color
private static int[] colorizeSinglePixel(int originalPixel, int newHue, float newSaturation, float newLightness)
{
    float[] originalPixelHSV = new float[3];
    Color.colorToHSV(originalPixel, originalPixelHSV);
    float originalPixelLightness = originalPixelHSV[2];

    // Android's HSV components S and V are 0..1 (the pseudo-code's 100 means 100%)
    float[] hueRGB_HSV = {newHue, 1.0f, 1.0f};
    int[] hueRGB = {Color.red(Color.HSVToColor(hueRGB_HSV)), Color.green(Color.HSVToColor(hueRGB_HSV)), Color.blue(Color.HSVToColor(hueRGB_HSV))};
    int[] color = blend2(new int[]{128, 128, 128}, hueRGB, newSaturation);
    int[] blackColor = {Color.red(Color.BLACK), Color.green(Color.BLACK), Color.blue(Color.BLACK)};
    int[] whiteColor = {Color.red(Color.WHITE), Color.green(Color.WHITE), Color.blue(Color.WHITE)};

    if (newLightness <= -1)
    {
        return blackColor;
    }
    else if (newLightness >= 1)
    {
        return whiteColor;
    }
    else if (newLightness >= 0)
    {
        // the blend position must stay fractional; truncating it to int collapses the blend
        return blend3(blackColor, color, whiteColor, 2 * (1 - newLightness) * (originalPixelLightness - 1) + 1);
    }
    else
    {
        // note the factor 2, as in the pseudo-code above
        return blend3(blackColor, color, whiteColor, 2 * (1 + newLightness) * originalPixelLightness - 1);
    }
}
private static int[] blend2(int[] left, int[] right, float pos)
{
    return new int[]{(int) (left[0] * (1 - pos) + right[0] * pos), (int) (left[1] * (1 - pos) + right[1] * pos), (int) (left[2] * (1 - pos) + right[2] * pos)};
}
private static int[] blend3(int[] left, int[] main, int[] right, float pos)
{
    if (pos < 0)
    {
        return blend2(left, main, pos + 1);
    }
    else if (pos > 0)
    {
        return blend2(main, right, pos);
    }
    else
    {
        return main;
    }
}
When the “Colorize” checkbox is checked, the lightness of the underlying layer is combined with the values of the Hue and Saturation sliders and converted from HSL to RGB according to the equations at https://en.wikipedia.org/wiki/HSL_and_HSV#From_HSL . (The Lightness slider just remaps the lightness to a subset of the scale as you can see from watching the histogram; the effect is pretty awful and I don’t see why anyone would ever use it.)
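For completeness, the cited HSL-to-RGB equations in compact form (a C++ sketch of the standard conversion; nothing here is specific to Photoshop):

#include <cmath>

struct RGB { double r, g, b; }; // channels in 0..1

// Standard HSL -> RGB (h in degrees 0..360, s and l in 0..1), following
// the equations on the Wikipedia page linked above.
RGB hslToRgb(double h, double s, double l)
{
    const double c = (1.0 - std::fabs(2.0 * l - 1.0)) * s; // chroma
    const double hp = h / 60.0;
    const double x = c * (1.0 - std::fabs(std::fmod(hp, 2.0) - 1.0));
    double r = 0, g = 0, b = 0;
    if      (hp < 1) { r = c; g = x; }
    else if (hp < 2) { r = x; g = c; }
    else if (hp < 3) { g = c; b = x; }
    else if (hp < 4) { g = x; b = c; }
    else if (hp < 5) { r = x; b = c; }
    else             { r = c; b = x; }
    const double m = l - c / 2.0;
    return { r + m, g + m, b + m };
}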
