I would like to perform something similar to a vignette, but instead of darkening the edges, I would like to make the edges fade to transparent with a gradient. I am not looking for a widget-based solution.
Any idea how I should modify the code below? Or is it even the same thing at all? I am sorry, but I really badly need this.
Thanks a lot.
Note: The code below is from the Image library
https://github.com/brendan-duncan/image
Image vignette(Image src, {num start = 0.3, num end = 0.75, num amount = 0.8}) {
  final h = src.height - 1;
  final w = src.width - 1;
  final num invAmt = 1.0 - amount;
  final p = src.getBytes();
  for (var y = 0, i = 0; y <= h; ++y) {
    final num dy = 0.5 - (y / h);
    for (var x = 0; x <= w; ++x, i += 4) {
      final num dx = 0.5 - (x / w);
      num d = sqrt(dx * dx + dy * dy);
      d = _smoothStep(end, start, d);
      p[i] = clamp255((amount * p[i] * d + invAmt * p[i]).toInt());
      p[i + 1] = clamp255((amount * p[i + 1] * d + invAmt * p[i + 1]).toInt());
      p[i + 2] = clamp255((amount * p[i + 2] * d + invAmt * p[i + 2]).toInt());
    }
  }
  return src;
}

num _smoothStep(num edge0, num edge1, num x) {
  x = ((x - edge0) / (edge1 - edge0));
  if (x < 0.0) {
    x = 0.0;
  }
  if (x > 1.0) {
    x = 1.0;
  }
  return x * x * (3.0 - 2.0 * x);
}
Solution
This code works without any widgets. In fact, it doesn't use any of the Flutter libraries and is based solely on Dart and the image package you introduced in your question.
The code contains comments which may not make a lot of sense until you read the explanation. The code is as follows:
vignette.dart
import 'dart:isolate';
import 'dart:typed_data';
import 'package:image/image.dart';
import 'dart:math' as math;

class VignetteParam {
  final Uint8List file;
  final SendPort sendPort;
  final double fraction;
  VignetteParam(this.file, this.sendPort, this.fraction);
}

void decodeIsolate(VignetteParam param) {
  Image image = decodeImage(param.file.buffer.asUint8List())!;
  Image crop = copyCropCircle(image); // crop the image with builtin function
  int r = crop.height ~/ 2; // radius is half the height
  int rs = r * r; // radius squared
  int tr = (param.fraction * r).floor(); // we apply the fraction to the radius to get tr
  int ors = (r - tr) * (r - tr); // ors from diagram
  x: for (int x = -r; x <= r; x++) { // iterate across all columns of pixels after shifting x by -r
    for (int y = -r; y <= r; y++) { // iterate across all rows of pixels after shifting y by -r
      int pos = x * x + y * y; // which is (r')² (diagram)
      if (pos <= rs) { // if we are inside the outer circle, then ...
        if (pos > ors) { // if we are outside the inner circle, then ...
          double f = (rs - pos) / (rs - ors); // calculate the fraction of the alpha value
          int c = setAlpha(
            crop.getPixelSafe(x + r, y + r),
            (0xFF * f).floor()
          ); // calculate the new color by changing the alpha
          crop.setPixelSafe(x + r, y + r, c); // set the new color
        } else { // if we reach the inner circle from the top then jump down
          y = y * -1;
        }
      } else { // if we are outside the outer circle and ...
        if (y < 0) { // above it, then jump down to the outer circle
          y = -(math.sin(math.acos(x / r)) * r).floor() - 1;
        } else {
          continue x; // or if beneath it, then jump to the next column
        }
      }
    }
  }
  param.sendPort.send(crop);
}

Future<Uint8List> getImage(Uint8List bytes, double radiusFraction) async {
  var receivePort = ReceivePort();
  await Isolate.spawn(
    decodeIsolate,
    VignetteParam(bytes, receivePort.sendPort, radiusFraction)
  );
  Image image = await receivePort.first;
  return encodePng(image) as Uint8List;
}
main.dart (example app)
import 'dart:typed_data';
import 'package:flutter/material.dart';
import 'package:flutter_playground/vignette.dart';
import 'package:flutter/services.dart' show ByteData, rootBundle;

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Material App',
      home: Scaffold(
        appBar: AppBar(
          title: const Text('Material App Bar'),
        ),
        body: Center(
          child: FutureBuilder<Uint8List>(
            future: imageFuture(),
            builder: (context, snapshot) {
              switch (snapshot.connectionState) {
                case ConnectionState.done:
                  return Image.memory(
                    snapshot.data!,
                  );
                default:
                  return CircularProgressIndicator();
              }
            }
          ),
        ),
      ),
    );
  }

  Future<Uint8List> imageFuture() async {
    // Load your file here and call getImage
    ByteData byteData = await rootBundle.load("assets/image.jpeg");
    return getImage(byteData.buffer.asUint8List(), 0.3);
  }
}
Explanation
The math behind this algorithm is very simple. It's only based on the equation of a circle. But first of all, have a look at this diagram:
Diagram
The diagram contains the square which is our image. Inside the square we can see the circle which is the visible part of the image. The opaque area is fully opaque, while the transparent area gets more transparent (= less alpha) the closer we get to the outer circle. r (radius) is the radius of the outer circle, tr (transparent radius) is the part of the radius which is in the transparent area. That's why r-tr is the radius of the inner circle.
In order to apply this diagram to our needs we have to shift our x-axis and y-axis. The image package has a grid which starts with (0,0) at the top left corner. But we need (0,0) at the center of the image, hence we shift it by the radius r. If you look closely you may notice that our diagram doesn't look as usual. The y-axis usually goes up, but in our case it really doesn't matter and makes things easier.
Calculation of the position
We need to iterate across all pixels inside the transparent area and change the alpha value. The equation of a circle, which is x'² + y'² = r'², helps us figure out whether a pixel is in the transparent area. (x',y') is the coordinate of the pixel and r' is the distance between the pixel and the origin. If the distance is beyond the inner circle but within the outer circle, then r' <= r and r' > r-tr must hold. In order to avoid calculating the square root, we instead compare the squared values, so pos <= rs and pos > ors must hold. (Look at the diagram to see what pos, rs, and ors are.)
Changing the alpha value
The pixel is inside the transparent area. How do we change the alpha value? We need a linear gradient of transparency, so we calculate the distance between the actual position pos and the outer circle, which is rs-pos. The longer the distance, the more alpha we need for this pixel. The total distance is rs-ors, so we calculate (rs-pos) / (rs-ors) to get the fraction f of our alpha value. Finally we scale the alpha value of this pixel by f.
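A quick numeric check (numbers mine): with r = 100 and tr = 30 we get rs = 10000 and ors = 4900. A pixel at squared distance pos = 7450 then gets f = (10000 - 7450) / (10000 - 4900) = 0.5, i.e. an alpha of 0x80, exactly halfway through the fade.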
Optimization
This algorithm actually does the whole job. But we can optimize it. I wrote that we have to iterate across all pixels inside the transparent area. So we don't need the pixels outside of the outer circle and inside the inner circle, as their alpha values don't change. But we have two for loops iterating through all pixels from left to right and from top to bottom. Well, when we reach the inner circle from outside, we can jump down by negating the y-position. Hence, we set y = y*-1; if we are inside the outer circle (pos <= rs) but no longer outside the inner circle (pos <= ors).
What if we are above the outer circle (pos > rs)? Well, then we can jump down to the outer circle by calculating the y-position with sine and arccosine. I won't go much into detail here, but if you want further explanation, let me know by commenting below. The if (y<0) just determines whether we are above the outer circle (y is negative) or beneath it. If above, jump down; if beneath, jump to the next column of pixels. Hence we 'continue' the x for loop.
Here you go - the widgetless approach based on my previous answer:
Canvas(PictureRecorder()).drawImage(
  your_image, // here is the image you want to change
  Offset.zero, // the offset from the corner of the canvas
  Paint()
    ..shader = RadialGradient(
      radius: needed_radius, // the radius of the resulting gradient - it should depend on the image dimensions
      colors: [Colors.black, Colors.transparent],
    ).createShader(
      Rect.fromLTRB(0, 0, your_image_width, your_image_height), // the portion of your image that should be influenced by the shader - in this case, I use the whole image
    )
    ..blendMode = BlendMode.dstIn); // for the black color of the gradient to act as a mask
I will add it to the previous answer as well.
Given that you want "transparency", you need the alpha channel. The code you provided steps through the data 4 bytes per pixel (RGBA), but it only ever touches the first 3 bytes, i.e. RGB, and leaves the alpha channel alone.
A solution may be:
Make sure the image really carries an alpha channel, i.e. 4 bytes per pixel.
Instead of modifying the RGB values to make the pixel darker, i.e. p[i] = ..., p[i+1] = ..., p[i+2] = ..., leave RGB unchanged and modify the alpha channel to make alpha smaller, e.g. p[i+3] = ... (supposing the format is RGBA instead of ARGB). A sketch of this idea follows below.
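As a rough sketch of that second point (Java here purely for illustration; the Dart version against the image package would be analogous, and alphaVignette/smoothStep are names I made up):

// Sketch: vignette as transparency instead of darkening (assumes a flat RGBA buffer, 4 bytes per pixel).
static void alphaVignette(byte[] p, int w, int h, double start, double end, double amount) {
    for (int y = 0, i = 0; y < h; y++) {
        double dy = 0.5 - (double) y / (h - 1);
        for (int x = 0; x < w; x++, i += 4) {
            double dx = 0.5 - (double) x / (w - 1);
            // smoothstepped distance from the center, exactly as in the question's code
            double d = smoothStep(end, start, Math.sqrt(dx * dx + dy * dy));
            int a = p[i + 3] & 0xFF; // current alpha
            // leave RGB (p[i], p[i+1], p[i+2]) untouched; scale only the alpha
            p[i + 3] = (byte) Math.min(255, (int) (amount * a * d + (1 - amount) * a));
        }
    }
}

static double smoothStep(double edge0, double edge1, double x) {
    x = Math.max(0, Math.min(1, (x - edge0) / (edge1 - edge0)));
    return x * x * (3 - 2 * x);
}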
Related
Recently I had the idea to make a pendulum out of points using Processing, and with a little learning I solved it easily:
int contador = 0;
int curvatura = 2;
float pendulo;

void setup() {
  size(300, 300);
}

void draw() {
  background(100);
  contador = (contador + 1) % 360; // "CONTADOR" GOES FROM 0 TO 359
  pendulo = sin(radians(contador)) * curvatura; // "PENDULO" EQUALS THE SIN OF CONTADOR, SO IT GOES FROM 1 TO -1 REPEATEDLY, THEN IS MULTIPLIED TO EMPHASIZE OR REDUCE THE CURVATURE OF THE LINE.
  tallo(width/2, height/3);
  println(pendulo);
}

void tallo(int x, int y) { // THE FUNCTION TO DRAW THE DOTTED LINE
  pushMatrix();
  translate(x, y);
  float _y = 0.0;
  for (int i = 0; i < 25; i++) { // CREATES THE POINTS SEQUENCE.
    ellipse(0, _y, 5, 5);
    _y += 5;
    rotate(radians(pendulo)); // ROTATE THEM ON EACH ITERATION, THIS MAKES THE SPIRAL.
  }
  popMatrix();
}
So, in brief, what I did was a function that changed every point's position with the rotate function, and then I just had to draw the ellipses at the origin coordinates, as the coordinate system is the real thing that changes position and creates the pendulum illusion.
[capture examples]
Everything was OK that far. The problem appeared when I tried to replace the ellipses with a path made of vertices. The problem is obvious: the path is never (visually) made, because all the vertices would be at 0,0 as they move along with the zero coordinates.
So, in order to make the path possible, I need the absolute values for each vertex; and here's the question: how do I get them?
What I know I have to do is remove the transform functions, create variables for the X and Y positions and update them inside the for loop, but then what? That's why I said this is a maths issue: which operation do I have to apply to the X and Y variables in order to make the path and its curvature possible?
void tallo(int x, int y) {
  pushMatrix();
  translate(x, y);
  // NOW WE START WITH THE CHANGES. LET'S DECLARE THE VARIABLES FOR THE COORDINATES
  float _x = 0.0;
  float _y = 0.0;
  beginShape();
  for (int i = 0; i < 25; i++) { // CREATES THE DOTS.
    vertex(_x, _y); // CHANGING TO VERTICES AND CALLING THE NEW VARIABLES, OK.
    //rotate(radians(pendulo)); <--- HERE IS MY PROBLEM. HOW DO I CONVERT THIS INTO X AND Y COORDINATES?
    //_x = _x + ????;
    _y = _y + 5 /* + ???? */;
  }
  endShape();
  popMatrix();
}
We need to keep in mind that the x and y increments change in each iteration of the for loop; it doesn't add the same quantity each time. The addition must be progressive. Otherwise, we would see a straight line rotating instead of a curve accentuating its curvature (if you increase curvatura's value to a number greater than 20, you will notice the spiral).
So, rotating the coordinates was a great solution to it; now it's kind of a muddle to work out the mathematical solution for the x and y coordinates of the spiral, and my secondary-school knowledge isn't enough. I know I have to create another variable inside the for loop in order to do this progression, but what operation should it have?
I would be really glad to know the maths.
You could use simple trigonometry. You know the angle and the hypotenuse, so you use cos to get the relative x position, and sin to get the y. The position would be relative to the central point.
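For instance, here is a sketch of tallo() rewritten that way, accumulating the rotation into an angle variable instead of calling rotate() (the variable names are mine; it should reproduce the point positions the rotate() version produces, and it assumes the pendulo global from your sketch):

void tallo(int x, int y) {
  pushMatrix();
  translate(x, y);
  float a = 0; // accumulated rotation, replaces the rotate() calls
  float d = 0; // distance from the origin, grows 5 px per point
  noFill();
  beginShape();
  for (int i = 0; i < 25; i++) {
    vertex(-d * sin(a), d * cos(a)); // the absolute position rotate() would have produced for (0, d)
    a += radians(pendulo);           // accumulate the per-step rotation
    d += 5;
  }
  endShape();
  popMatrix();
}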
But before I explain in detail and draw some explanations, let me propose another solution: PVectors.
PVector center;
float angle = 0;
float arc_magn = HALF_PI;
float wire_length = 150;
float rotation_angle = PI/20 / 60; // we divide it by 60 so the first part is the rotation in one second

void setup() {
  size(400, 400);
  frameRate(60);
  center = new PVector(width/2, height/3); // defined here because width and height are only set after size()
}

void draw() {
  background(255);
  fill(0);
  stroke(0);
  angle = arc_magn * sin((float) frameCount / 60);
  draw_pendulum(center);
}

void draw_pendulum(PVector origin) {
  PVector temp_vect = PVector.fromAngle(angle + HALF_PI);
  temp_vect.setMag(wire_length);
  PVector final_pos = new PVector(origin.x + temp_vect.x, origin.y + temp_vect.y);
  ellipse(final_pos.x, final_pos.y, 40, 40);
  line(origin.x, origin.y, final_pos.x, final_pos.y);
}
You use the PVector class's static method fromAngle(float angle), which returns a unit vector for the given angle, then use .setMag() to define its length.
Those PVector methods will take care of the trigonometry for you.
If you still want to know the math behind it, I can make another example.
I have a bit-map image:
(However, this should work with any arbitrary image)
I want to take my image and make it a 3D SCNNode. I've accomplished that much with the code below, which takes each pixel in the image and creates an SCNNode with an SCNBox geometry.
static inline SCNNode* NodeFromSprite(const UIImage* image) {
    SCNNode *node = [SCNNode node];
    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
    const UInt8* data = CFDataGetBytePtr(pixelData);
    for (int x = 0; x < image.size.width; x++)
    {
        for (int y = 0; y < image.size.height; y++)
        {
            int pixelInfo = ((image.size.width * y) + x) * 4;
            UInt8 alpha = data[pixelInfo + 3];
            if (alpha > 3)
            {
                UInt8 red = data[pixelInfo];
                UInt8 green = data[pixelInfo + 1];
                UInt8 blue = data[pixelInfo + 2];
                UIColor *color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f];
                SCNNode *pixel = [SCNNode node];
                pixel.geometry = [SCNBox boxWithWidth:1.001 height:1.001 length:1.001 chamferRadius:0];
                pixel.geometry.firstMaterial.diffuse.contents = color;
                pixel.position = SCNVector3Make(x - image.size.width / 2.0,
                                                y - image.size.height / 2.0,
                                                0);
                [node addChildNode:pixel];
            }
        }
    }
    CFRelease(pixelData);
    node = [node flattenedClone];
    // The image is upside down and I have no idea why.
    node.rotation = SCNVector4Make(1, 0, 0, M_PI);
    return node;
}
But the problem is that what I'm doing takes up way too much memory!
I'm trying to find a way to do this with less memory.
All Code and resources can be found at:
https://github.com/KonradWright/KNodeFromSprite
Right now you are drawing each pixel as an SCNBox of a certain color, which means:
one GL draw call per box
two unnecessary invisible faces drawn between adjacent boxes
N identical 1x1x1 boxes drawn in a row when one 1x1xN box could be drawn instead
This looks like a common Minecraft-like optimization problem:
Treat your image as a 3-dimensional array (where depth is the wanted image extrusion depth), with each element representing a cube voxel of a certain color.
Use a greedy meshing algorithm (demo) and a custom SCNGeometry to create the mesh for the SceneKit node.
Pseudo-code for a meshing algorithm that skips faces of adjacent cubes (simpler, but less effective than greedy meshing):
#define SIZE_X 16 // image width
#define SIZE_Y 16 // image height

// pixel data, 0 = transparent pixel
int data[SIZE_X][SIZE_Y];

// check if there is a non-transparent neighbour at x, y
BOOL has_neighbour(x, y) {
    if (x < 0 || x >= SIZE_X || y < 0 || y >= SIZE_Y || data[x][y] == 0)
        return NO; // out of dimensions or transparent
    else
        return YES;
}

void add_face(x, y, orientation, color) {
    // add face at (x, y) with specified color and orientation = TOP, BOTTOM, LEFT, RIGHT, FRONT, BACK
    // can be (easier and slower) implemented with SCNPlane's: https://developer.apple.com/library/mac/documentation/SceneKit/Reference/SCNPlane_Class/index.html#//apple_ref/doc/uid/TP40012010-CLSCHSCNPlane-SW8
    // or (harder and faster) using Custom Geometry: https://github.com/d-ronnqvist/blogpost-codesample-CustomGeometry/blob/master/CustomGeometry/CustomGeometryView.m#L84
}

for (x = 0; x < SIZE_X; x++) {
    for (y = 0; y < SIZE_Y; y++) {
        int color = data[x][y];
        // skip if current pixel is transparent
        if (color == 0)
            continue;
        // check neighbour at top
        if (! has_neighbour(x, y + 1))
            add_face(x, y, TOP, color);
        // check neighbour at bottom
        if (! has_neighbour(x, y - 1))
            add_face(x, y, BOTTOM, color);
        // check neighbour at left
        if (! has_neighbour(x - 1, y))
            add_face(x, y, LEFT, color);
        // check neighbour at right
        if (! has_neighbour(x + 1, y))
            add_face(x, y, RIGHT, color);
        // since the array is 2D, front and back faces are always visible for non-transparent pixels
        add_face(x, y, FRONT, color);
        add_face(x, y, BACK, color);
    }
}
A lot depends on the input image. If it is not big and doesn't have a wide variety of colors, I would go with an SCNNode, adding SCNPlane's for the visible faces and then flattenedClone()ing the result.
An approach similar to the one proposed by Ef Dot:
To keep the number of draw calls as small as possible, you want to keep the number of materials as small as possible. Here you will want one SCNMaterial per color.
To keep the number of draw calls as small as possible, make sure that no two geometry elements (SCNGeometryElement) use the same material. In other words, use one geometry element per material (color).
So you will have to build a SCNGeometry that has N geometry elements and N materials, where N is the number of distinct colors in your image.
For each color in your image, build a polygon (or group of disjoint polygons) from all the pixels of that color.
Triangulate each polygon (or group of polygons) and build a geometry element with that triangulation.
Build the geometry from the geometry elements.
If you don't feel comfortable triangulating the polygons yourself, you can leverage SCNShape.
For each polygon (or group of polygons), create a single UIBezierPath and build an SCNShape with that.
Merge all the geometry sources of your shapes into a single source, and reuse the geometry elements to create a custom SCNGeometry.
Note that some vertices will be duplicated if you use a collection of SCNShapes to build the geometry. With little effort you can make sure that no two vertices in your final source have the same position, updating the indexes in the geometry elements accordingly.
I can also direct you to this excellent GitHub repo by Nick Lockwood:
https://github.com/nicklockwood/FPSControls
It will show you how to generate the meshes as planes (instead of cubes), which is a fast way to achieve what you need for simple scenes using a "neighboring" check.
If you need large complex scenes, then I suggest you go for the solution proposed by Ef Dot using a greedy meshing algorithm.
This is the function to draw a rectangle, with the respective values provided for its parameters:
void rectangle(Mat& img, Point pt1, Point pt2, const Scalar& color, int thickness=1, int lineType=8, int shift=0);
Users can use this function to set an ROI with the mouse, or to draw rectangles on detected matches in a Template Matching application.
My question is: the 2nd and 3rd parameters here are Points. If the user wants to get the point 1 and point 2 values for further processing, how do they get them? How do they print both point values? Is a Point-to-double or -int conversion needed?
Can anyone clear up my doubts? Thanks in advance for the help!
Updated:
void mouseHandler(int event, int x, int y, int flags, void* param)
{
    if (event == CV_EVENT_LBUTTONDOWN && !drag)
    {
        /* left button clicked. ROI selection begins */
        point1 = Point(x, y);
        drag = 1;
    }
    if (event == CV_EVENT_MOUSEMOVE && drag)
    {
        /* mouse dragged. ROI being selected */
        Mat img1 = mod_tempimg.clone();
        point2 = Point(x, y);
        rectangle(img1, point1, point2, CV_RGB(255, 0, 0), 1, 8, 0);
        imshow("image", img1);
    }
    if (event == CV_EVENT_LBUTTONUP && drag)
    {
        Mat img2 = mod_tempimg.clone();
        point2 = Point(x, y);
        rect = Rect(point1.x, point1.y, x - point1.x, y - point1.y);
        drag = 0;
        roiImg = mod_tempimg(rect);
        imshow("image", img2);
    }
    if (event == CV_EVENT_LBUTTONUP)
    {
        /* ROI selected */
        select_flag = 1;
        drag = 0;
    }
}
In the above code, how do I retrieve the Point values from this line?
rect = Rect(point1.x, point1.y, x - point1.x, y - point1.y);
If I know the values, that will be helpful for finding the angle of rect.
Even after the update, the question is not clear to me; I am not sure what exactly you are asking.
Anyway, as far as I understand, you are creating a rectangle object here:
rect = Rect(point1.x,point1.y,x-point1.x,y-point1.y);
and you want to get the corner points of rect later.
rect.tl() gives the top left corner point and rect.br() gives the bottom right corner point. You can also get the x and y values of a corner by : rect.tl().x or rect.br().y
I do not know what you mean by "find angle of rect". Rectangles have 90 degree angles.
When you are writing the program for drawing a rectangle with 2 points, you have the points in hand.
Print the point: cout << pt1
Print the x value and y value of the point: cout << pt1.x << pt1.y
Assign an x value explicitly: pt1.x = 0
Get the pixel intensity at some point: image.at<uchar>(pt1) [for a grayscale image]
I'm trying to find the orientation of a binary image (where orientation is defined to be the axis of least moment of inertia, i.e. least second moment of area). I'm using Dr. Horn's book (MIT) on Robot Vision which can be found here as reference.
Using OpenCV, here is my function, where a, b, and c are the second moments of area as found on page 15 of the pdf above (page 60 of the text):
Point3d findCenterAndOrientation(const Mat& src)
{
    Moments m = cv::moments(src, true);
    double cen_x = m.m10/m.m00; // Centers are right
    double cen_y = m.m01/m.m00;
    double a = m.m20-m.m00*cen_x*cen_x;
    double b = 2*m.m11-m.m00*(cen_x*cen_x+cen_y*cen_y);
    double c = m.m02-m.m00*cen_y*cen_y;
    double theta = a==c?0:atan2(b, a-c)/2.0;
    return Point3d(cen_x, cen_y, theta);
}
OpenCV calculates the second moments around the origin (0,0), so I have to use the Parallel Axis Theorem to move the axis to the center of the shape (the mr² correction term).
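For reference, the standard central moments obtained this way (with $\bar{x} = m_{10}/m_{00}$ and $\bar{y} = m_{01}/m_{00}$) are:

$$\mu_{20} = m_{20} - m_{00}\bar{x}^{2}, \qquad \mu_{02} = m_{02} - m_{00}\bar{y}^{2}, \qquad \mu_{11} = m_{11} - m_{00}\bar{x}\bar{y}$$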
The center looks right when I call
Point3d p = findCenterAndOrientation(src);
rectangle(src, Point(p.x-1,p.y-1), Point(p.x+1, p.y+1), Scalar(0.25), 1);
But when I try to draw the axis with lowest moment of inertia, using this function, it looks completely wrong:
line(src, (Point(p.x,p.y)-Point(100*cos(p.z), 100*sin(p.z))), (Point(p.x, p.y)+Point(100*cos(p.z), 100*sin(p.z))), Scalar(0.5), 1);
Here are some examples of input and output:
(I'd expect it to be vertical)
(I'd expect it to be horizontal)
I worked with orientation some time back and coded the following. It returns the exact orientation of the object. largest_contour is the shape that is detected.
CvMoments moments1, cenmoments1;
double M00, M01, M10;

cvMoments(largest_contour, &moments1);
M00 = cvGetSpatialMoment(&moments1, 0, 0);
M10 = cvGetSpatialMoment(&moments1, 1, 0);
M01 = cvGetSpatialMoment(&moments1, 0, 1);
posX_Yellow = (int)(M10/M00);
posY_Yellow = (int)(M01/M00);

double theta = 0.5 * atan(
    (2 * cvGetCentralMoment(&moments1, 1, 1)) /
    (cvGetCentralMoment(&moments1, 2, 0) - cvGetCentralMoment(&moments1, 0, 2)));
theta = (theta / PI) * 180;

// fit an ellipse (and draw it)
if (largest_contour->total >= 6) // can only do an ellipse fit
                                 // if we have > 6 points
{
    CvBox2D box = cvFitEllipse2(largest_contour);
    if ((box.size.width < imgYellowThresh->width) && (box.size.height < imgYellowThresh->height))
    {
        cvEllipseBox(imgYellowThresh, box, CV_RGB(255, 255, 255), 3, 8, 0);
    }
}
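The angle computed above is the standard central-moment orientation formula, which the code then converts from radians to degrees:

$$\theta = \frac{1}{2}\,\arctan\!\left(\frac{2\mu_{11}}{\mu_{20} - \mu_{02}}\right)$$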
Does anyone know how adjustment layers work in Photoshop? I need to generate a result image from a source image and the HSL values of a Hue/Saturation adjustment layer. Converting to RGB and then multiplying with the source color does not work.
Or is it possible to replace a Hue/Saturation adjustment layer with normal layers with appropriately set blending modes (Multiply, Screen, Hue, Saturation, Color, Luminosity, ...)?
If so, then how?
Thanks
I've reverse-engineered the computation for when the "Colorize" checkbox is checked. All of the code below is pseudo-code.
The inputs are:
hueRGB, which is an RGB color for HSV(photoshop_hue, 100, 100).ToRGB()
saturation, which is photoshop_saturation / 100.0 (i.e. 0..1)
lightness, which is photoshop_lightness / 100.0 (i.e. -1..1)
value, which is the pixel.ToHSV().Value, scaled into 0..1 range.
The method to colorize a single pixel:
color = blend2(rgb(128, 128, 128), hueRGB, saturation);

if (lightness <= -1)
    return black;
else if (lightness >= 1)
    return white;
else if (lightness >= 0)
    return blend3(black, color, white, 2 * (1 - lightness) * (value - 1) + 1);
else
    return blend3(black, color, white, 2 * (1 + lightness) * (value) - 1);
Where blend2 and blend3 are:
blend2(left, right, pos):
    return rgb(left.R * (1-pos) + right.R * pos, same for green, same for blue)

blend3(left, main, right, pos):
    if (pos < 0)
        return blend2(left, main, pos + 1)
    else if (pos > 0)
        return blend2(main, right, pos)
    else
        return main
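A quick sanity check of the pseudo-code (numbers mine): with photoshop_hue = 0 (so hueRGB is pure red), saturation = 1 and lightness = 0, a pixel with value v goes through blend3(black, red, white, 2v - 1). A black pixel (v = 0) stays black, a mid-gray pixel (v = 0.5) becomes pure red, and a white pixel (v = 1) stays white, which matches what Colorize does in Photoshop.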
I have figured out how Lightness works.
The input parameter brightness b is in [0, 2]; the output is c (a color channel).
if (b < 1) c = b * c;
else c = c + (b - 1) * (1 - c);
Some tests:
b = 0 >>> c = 0 // black
b = 1 >>> c = c // same color
b = 2 >>> c = 1 // white
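Written out as a formula (assuming b = 1 + lightness/100, which is my reading of the [0, 2] range):

$$c' = \begin{cases} b\,c, & b < 1 \\ c + (b - 1)(1 - c), & b \ge 1 \end{cases}$$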
However, if you choose some interval (e.g. Reds instead of Master), Lightness behaves completely differently, more like Saturation.
Photoshop, dunno. But the theory is usually: the RGB image is converted to HSL/HSV by the particular layer's internal methods; each pixel's HSL is then modified according to the specified parameters, and the result so obtained is converted back to RGB for display.
PaintShopPro7 used to split up the H space (assuming a range of 0..360) into discrete increments of 30° (IIRC), so if you bumped only the "yellows", only pixels whose H component was valued 45..75 would be considered for manipulation.
reds 345..15, oranges 15..45, yellows 45..75, yellow-greens 75..105, greens 105..135, etc.
if (h >= 45 && h < 75)
    s += s * yellow_percent;

There are alternative possibilities, such as applying a falloff filter, as in:

/* For h = 60, let m = 1, and linearly fall off to m = 0 at h = 45 and h = 75. */
m = 1 - abs(h - 60) / 15;
if (m < 0)
    m = 0;
s += s * yellow_percent * m;
Hello, I wrote a colorize shader and my equation is as follows:
inputRGB is the source image, which should be monochrome: (r + g + b) * 0.333
colorRGB is your destination color
finalRGB is the result
Pseudo code:
finalRGB = inputRGB * (colorRGB + inputRGB * 0.5);
I think it's fast and efficient.
I translated @Roman Starkov's solution to Java in case anyone needs it, but for some reason it didn't work so well. Then I read a little and found that the solution is very simple; there are 2 things that have to be done:
When changing the hue or saturation, replace only the hue and saturation of the original image; the lightness stays as it was in the original image. This blend method is called the luminosity blend mode (section 10.2.4):
https://www.w3.org/TR/compositing-1/#backdrop
When changing the lightness in Photoshop, the slider indicates what percentage we need to add to or subtract from the original lightness in order to reach white or black in HSL.
For example:
If the original pixel has 0.7 lightness and the lightness slider = 20,
we need 0.3 more lightness to get to 1,
so we add to the original pixel's lightness: 0.7 + 0.2*0.3;
this will be the new blended lightness value for the new pixel.
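The same rule, written as a formula (L is the original lightness in 0..1 and s is the slider value in -100..100; the notation is mine):

$$L' = \begin{cases} L + \dfrac{s}{100}\,(1 - L), & s \ge 0 \\[4pt] L + \dfrac{s}{100}\,L, & s < 0 \end{cases}$$

For the example above: $L' = 0.7 + 0.2 \times (1 - 0.7) = 0.76$.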
@Roman Starkov's solution, Java implementation:
// newHue, which is photoshop_hue (i.e. 0..360)
// newSaturation, which is photoshop_saturation / 100.0 (i.e. 0..1)
// newLightness, which is photoshop_lightness / 100.0 (i.e. -1..1)
// returns an rgb int array for the new color
private static int[] colorizeSinglePixel(int originalPixel, int newHue, float newSaturation, float newLightness)
{
    float[] originalPixelHSV = new float[3];
    Color.colorToHSV(originalPixel, originalPixelHSV);
    float originalPixelLightness = originalPixelHSV[2];

    float[] hueRGB_HSV = {newHue, 1.0f, 1.0f}; // note: Android expects S and V in 0..1
    int hueColor = Color.HSVToColor(hueRGB_HSV);
    int[] hueRGB = {Color.red(hueColor), Color.green(hueColor), Color.blue(hueColor)};

    int[] color = blend2(new int[]{128, 128, 128}, hueRGB, newSaturation);
    int[] blackColor = {Color.red(Color.BLACK), Color.green(Color.BLACK), Color.blue(Color.BLACK)};
    int[] whiteColor = {Color.red(Color.WHITE), Color.green(Color.WHITE), Color.blue(Color.WHITE)};

    if (newLightness <= -1)
    {
        return blackColor;
    }
    else if (newLightness >= 1)
    {
        return whiteColor;
    }
    else if (newLightness >= 0)
    {
        return blend3(blackColor, color, whiteColor, 2 * (1 - newLightness) * (originalPixelLightness - 1) + 1);
    }
    else
    {
        return blend3(blackColor, color, whiteColor, 2 * (1 + newLightness) * originalPixelLightness - 1);
    }
}
private static int[] blend2(int[] left, int[] right, float pos)
{
    return new int[]{
        (int) (left[0] * (1 - pos) + right[0] * pos),
        (int) (left[1] * (1 - pos) + right[1] * pos),
        (int) (left[2] * (1 - pos) + right[2] * pos)};
}

private static int[] blend3(int[] left, int[] main, int[] right, float pos) // float, not int: pos is a fraction in -1..1
{
    if (pos < 0)
    {
        return blend2(left, main, pos + 1);
    }
    else if (pos > 0)
    {
        return blend2(main, right, pos);
    }
    else
    {
        return main;
    }
}
When the “Colorize” checkbox is checked, the lightness of the underlying layer is combined with the values of the Hue and Saturation sliders and converted from HSL to RGB according to the equations at https://en.wikipedia.org/wiki/HSL_and_HSV#From_HSL . (The Lightness slider just remaps the lightness to a subset of the scale as you can see from watching the histogram; the effect is pretty awful and I don’t see why anyone would ever use it.)
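For completeness, the HSL-to-RGB equations referenced there are (standard, not specific to Photoshop):

$$C = (1 - |2L - 1|)\,S, \qquad X = C\,\bigl(1 - |(H/60^{\circ}) \bmod 2 - 1|\bigr), \qquad m = L - C/2$$

where $(R', G', B')$ is $(C, X, 0)$, $(X, C, 0)$, $(0, C, X)$, $(0, X, C)$, $(X, 0, C)$ or $(C, 0, X)$ depending on the 60° sector of H, and $(R, G, B) = (R' + m,\; G' + m,\; B' + m)$.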