I'm trying to understand these calculations for YUV420P to RGB conversion in an OpenGL fragment shader. On https://en.wikipedia.org/wiki/YUV there are lots of formulas, but none of them looks like the one below. Why subtract 0.0625, 0.5 and 0.5 in the first part? And where did the second part come from?
yuv.r = texture(tex_y, TexCoord).r - 0.0625;
yuv.g = texture(tex_u, TexCoord).r - 0.5;
yuv.b = texture(tex_v, TexCoord).r - 0.5;
rgba.r = yuv.r + 1.596 * yuv.b;
rgba.g = yuv.r - 0.813 * yuv.b - 0.391 * yuv.g;
rgba.b = yuv.r + 2.018 * yuv.g;
It may be a special conversion for some specific YUV color scheme, but I couldn't find anything on the internet.
Why take [...] 0.5 and 0.5 in the first part?
U and V are stored in the green and blue color channels of the texture. The values in the color channels are stored in the range [0.0, 1.0]. For the computations, the values have to be mapped to the range [-0.5, 0.5]:
yuv.g = texture(tex_u, TexCoord).r - 0.5;
yuv.b = texture(tex_v, TexCoord).r - 0.5;
Subtracting 0.0625 (= 16/256) from the luma channel is just an optimization: it removes the black-level offset of limited-range ("TV-range") video once, so it does not have to be subtracted separately in each expression later.
The algorithm is the same as in How to convert RGB -> YUV -> RGB (both ways) or various books.
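For reference, here is the commonly published BT.601 limited-range ("TV-range") conversion these coefficients come from, as a minimal C sketch (function and variable names are mine). Note the 1.164 luma scale factor, which the shader above leaves out:

// BT.601 limited-range YCbCr -> RGB, all values normalized to [0.0, 1.0].
// y carries a black-level offset of 16/256 = 0.0625; cb and cr are centered at 0.5.
static void ycbcr601_to_rgb(float y, float cb, float cr,
                            float *r, float *g, float *b)
{
    float yn = 1.164f * (y - 0.0625f); // rescale luma from limited range to full range
    *r = yn + 1.596f * (cr - 0.5f);
    *g = yn - 0.813f * (cr - 0.5f) - 0.391f * (cb - 0.5f);
    *b = yn + 2.018f * (cb - 0.5f);
}

The results can land slightly outside [0.0, 1.0], so clamping them afterwards is advisable.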
In my app, I randomly generate a color via the following function:
UIColor *originalRandomColor = [UIColor
    colorWithRed:arc4random_uniform(256) / 255.0  // 256, so that 1.0 is reachable
           green:arc4random_uniform(256) / 255.0
            blue:arc4random_uniform(256) / 255.0
           alpha:1.0];
And I want to generate a similar color to the above that is also random. I want to use a constant to determine how similar the new color is to the old color.
I've been trying to do this by first taking the red value, generating a small random number, and randomly choosing to add or subtract it to form a new value. Then I repeat the process for green and blue, and reassemble the new, similar color.
In the following code, counter is an int. When counter is 1, I want the difference to be more pronounced than when counter is 20, for example.
I'm trying to do it like this:
CGFloat red = 0.0, green = 0.0, blue = 0.0, alpha = 0.0;
[originalRandomColor getRed:&red green:&green blue:&blue alpha:&alpha];
//Randomly generates a 0 or 1
//0 results in subtracting - 1 results in adding the value
int AddSubtract = arc4random() %2;
// double val = 20 - counter;
// val = val/10 - 1;
// if (val < .2) {
// val = .2;
// }
// float x = (arc4random() % 100)/(float)100;
// NSLog(#"**********%f", x);
// x = x/((float)counter/100);
// NSLog(#"----------%f", x);
float x = (20-counter)/10;
NSLog(#"----------%f", x);
if (AddSubtract == 0) //subtract the val
red -= x;
else //add the val
red += x;
//Then repeated for green/blue
UIColor *newColor = [UIColor colorWithRed:red green:green blue:blue alpha:1.0];
The problem I'm having with the above is that it generates new colors that are drastically different from the original color. The original color will be a shade of green, and the new color will be a bright purple. When I NSLog the values, I'm getting crazy numbers, so clearly something is going awry.
Thanks in advance!
You have
float x = (20-counter)/10;
Given counter is an int, (20-counter)/10 is integer division: the fractional part is truncated before the result is assigned to x, so it can only be a whole number such as 0, 1 or 2.
You have to add a type-cast:
float x = (float)(20-counter) / 10;
or, more simply:
float x = (20.0f - counter) / 10.0f;
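Putting the fix together, a minimal sketch of the whole similar-color step (assuming counter runs from 1 to 20 as in the question; the maxOffset scaling and the nudge block are my own choices, not from the original code):

CGFloat red = 0.0, green = 0.0, blue = 0.0, alpha = 0.0;
[originalRandomColor getRed:&red green:&green blue:&blue alpha:&alpha];

// Larger counter -> smaller maximum offset -> more similar color.
CGFloat maxOffset = (20.0f - counter) / 40.0f; // counter = 1 -> 0.475, counter = 20 -> 0.0

CGFloat (^nudge)(CGFloat) = ^CGFloat(CGFloat channel) {
    // Random offset in [-maxOffset, +maxOffset], clamped to the valid channel range.
    CGFloat offset = (arc4random_uniform(2001) / 1000.0f - 1.0f) * maxOffset;
    return MIN(1.0f, MAX(0.0f, channel + offset));
};

UIColor *newColor = [UIColor colorWithRed:nudge(red)
                                    green:nudge(green)
                                     blue:nudge(blue)
                                    alpha:1.0];

The clamp keeps each channel inside [0.0, 1.0], so an offset applied near the edges of the range can't push it out of bounds.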
I'm using this code to generate a random UIColor for UILabel text in an iOS app with a white background.
Problem is that some of the text turns out to be invisible against the white background.
How would you modify the code to ensure that any color chosen would be reasonably visible on a white background?
This question is as much about colors as it is programming.
+ (UIColor *) randomColor {
    // arc4random() spans the full uint32_t range, so divide by UINT32_MAX (not RAND_MAX)
    CGFloat red = (CGFloat)arc4random() / (CGFloat)UINT32_MAX;
    CGFloat blue = (CGFloat)arc4random() / (CGFloat)UINT32_MAX;
    CGFloat green = (CGFloat)arc4random() / (CGFloat)UINT32_MAX;
    return [UIColor colorWithRed:red green:green blue:blue alpha:1.0];
}
What I would do is determine the "gray" level of the RGB value. If it's "too close to white", then try again.
A formula I've used is:
float gray = 0.299 * red + 0.587 * green + 0.114 * blue;
This gives 0 for black and 1 for white. Pick a threshold such as 0.6 or whatever works for you.
+ (UIColor *) randomColor {
    while (1) {
        // Divide by UINT32_MAX (not RAND_MAX) to map arc4random() into [0, 1].
        CGFloat red = (CGFloat)arc4random() / (CGFloat)UINT32_MAX;
        CGFloat blue = (CGFloat)arc4random() / (CGFloat)UINT32_MAX;
        CGFloat green = (CGFloat)arc4random() / (CGFloat)UINT32_MAX;
        CGFloat gray = 0.299 * red + 0.587 * green + 0.114 * blue;
        if (gray < 0.6) {
            return [UIColor colorWithRed:red green:green blue:blue alpha:1.0];
        }
    }
}
White is (1, 1, 1). You can hack it so that as you generate colors, you retry until the values are not too close to (1.0, 1.0, 1.0). Really, if one channel is around 0.5 and the rest are close to 1.0, you're fine. Statistically you should reach a color that meets the rules within a few tries; you would have to determine by inspection what threshold is acceptable.
I think Maddy's answer is the best answer you've gotten.
Another, similar approach would be to calculate the Euclidean ("Pythagorean") distance between your color and the background color. Pseudocode, where ^ denotes exponentiation and the two colors are red1/green1/blue1 and red2/green2/blue2:
CGFloat difference = sqrt( (red1 - red2)^2 + (green1 - green2)^2 +
(blue1 - blue2)^2 );
That approach would let you find colors that are different from any arbitrary background color. To account for human color perception, you could also use the weights from Maddy's grayscale formula:
CGFloat difference = sqrt( 0.299 * (red1 - red2)^2 + 0.587 * (green1 - green2)^2 +
0.114 * (blue1 - blue2)^2 );
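A runnable Objective-C version of the weighted distance (the helper name is mine; it assumes both colors are in an RGB-compatible color space so getRed:green:blue:alpha: succeeds):

#import <UIKit/UIKit.h>
#include <math.h>

// Perceptually weighted distance between two colors; 0 means identical.
// The weights are the luma coefficients from the grayscale formula above.
static CGFloat WeightedColorDistance(UIColor *c1, UIColor *c2)
{
    CGFloat r1, g1, b1, a1, r2, g2, b2, a2;
    [c1 getRed:&r1 green:&g1 blue:&b1 alpha:&a1];
    [c2 getRed:&r2 green:&g2 blue:&b2 alpha:&a2];
    return sqrt(0.299 * (r1 - r2) * (r1 - r2) +
                0.587 * (g1 - g2) * (g1 - g2) +
                0.114 * (b1 - b2) * (b1 - b2));
}

You could then loop, rejecting candidates whose distance to [UIColor whiteColor] falls below some threshold (say 0.3; the exact value is something to tune by eye).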
Better still would be to convert the RGB values to LAB color space and compare the distance between colors in LAB space, but that's way beyond the scope of this discussion.
An (R, G, B) color approaches white as all three values get close to 255. So make your random number generator avoid producing values that are too close to 255 for all three channels at once.
Hope that helps.
I'm using CGLayers to implement a "painting" technique similar to Photoshop airbrush - and have run into something strange. When I use transparency and overpaint an area, the color never reaches full intensity (if the alpha value is below 0.5). My application uses a circular "airbrush" pattern with opacity fall off at the edges but I have reproduced the problem just using a semi-transparent white square. When the opacity level is less than 0.5, the overpainted area never reaches the pure white of the source layer. I probably wouldn't have noticed but I'm using the result of the painting as a mask, and not being able to get pure white causes problems. Any ideas what's going on here? Target iOS SDK 5.1.
Below is the resultant color after drawing the semi-transparent square many times over a black background:
opacity   color
-------   -----
  1.0      255
  0.9      255
  0.8      255
  0.7      255
  0.6      255
  0.5      255
  0.4      254
  0.3      253
  0.2      252
  0.1      247
Simplified code that shows the issue:
- (void)drawRect:(CGRect)rect
{
    CGContextRef viewContext = UIGraphicsGetCurrentContext();

    // Create grey gradient to compare final blend color
    CGRect lineRect = CGRectMake(20, 20, 1, 400);
    float greyLevel = 1.0;
    for (int i = 0; i < 728; i++)
    {
        CGContextSetRGBFillColor(viewContext, greyLevel, greyLevel, greyLevel, 1);
        CGContextFillRect(viewContext, lineRect);
        lineRect.origin.x += 1;
        greyLevel -= 0.0001;
    }

    // Create semi-transparent white square
    CGSize whiteSquareSize = CGSizeMake(40, 40);
    CGLayerRef whiteSquareLayer = CGLayerCreateWithContext(viewContext, whiteSquareSize, NULL);
    CGContextRef whiteSquareContext = CGLayerGetContext(whiteSquareLayer);
    CGContextSetAlpha(whiteSquareContext, 1.0f); // just to make sure
    CGContextSetRGBFillColor(whiteSquareContext, 1, 1, 1, 0.3); // ??? color never reaches pure white if alpha < 0.5
    CGRect whiteSquareRect = CGRectMake(0, 0, whiteSquareSize.width, whiteSquareSize.height);
    CGContextFillRect(whiteSquareContext, whiteSquareRect);

    // "Paint" with the layer a bazillion times
    CGContextSetBlendMode(viewContext, kCGBlendModeNormal); // just to make sure
    CGContextSetAlpha(viewContext, 1.0); // just to make sure
    for (int strokeNum = 0; strokeNum < 100; strokeNum++)
    {
        CGPoint drawPoint = CGPointMake(0, 400);
        for (int x = 0; x < 730; x++)
        {
            CGContextDrawLayerAtPoint(viewContext, drawPoint, whiteSquareLayer);
            drawPoint.x++;
        }
    }
}
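One plausible explanation (an assumption on my part; Apple does not document the exact rounding Core Graphics uses): the blending is done in 8-bit fixed point, so once the per-pass contribution alpha * (255 - c) quantizes to zero, repeated overpainting stops converging toward 255. A minimal C simulation of that assumed model:

#include <stdio.h>

// Assumed model: each pass adds alpha * (255 - c), rounded to the nearest
// 8-bit step. Once that increment rounds to zero, the color stops rising.
int main(void)
{
    for (int tenths = 10; tenths >= 1; tenths--) {
        double alpha = tenths / 10.0;
        int c = 0; // destination starts black
        for (int pass = 0; pass < 100; pass++) {
            c += (int)(alpha * (255 - c) + 0.5);
        }
        printf("alpha %.1f -> %d\n", alpha, c);
    }
    return 0;
}

The simulated plateaus differ slightly from the measured table, but the pattern is the same: at alpha >= 0.5 the pixel still reaches 255, while below that the increment eventually rounds to zero and the color gets stuck short of pure white.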
Does anyone know how adjustment layers work in Photoshop? I need to generate a result image given a source image and the HSL values from a Hue/Saturation adjustment layer. Converting to RGB and then multiplying with the source color does not work.
Or is it possible to replace a Hue/Saturation adjustment layer with normal layers with appropriately set blending modes (Multiply, Screen, Hue, Saturation, Color, Luminosity, ...)?
If so, then how?
Thanks
I've reverse-engineered the computation for when the "Colorize" checkbox is checked. All of the code below is pseudo-code.
The inputs are:
hueRGB, which is an RGB color for HSV(photoshop_hue, 100, 100).ToRGB()
saturation, which is photoshop_saturation / 100.0 (i.e. 0..1)
lightness, which is photoshop_lightness / 100.0 (i.e. -1..1)
value, which is the pixel.ToHSV().Value, scaled into 0..1 range.
The method to colorize a single pixel:
color = blend2(rgb(128, 128, 128), hueRGB, saturation);

if (lightness <= -1)
    return black;
else if (lightness >= 1)
    return white;
else if (lightness >= 0)
    return blend3(black, color, white, 2 * (1 - lightness) * (value - 1) + 1);
else
    return blend3(black, color, white, 2 * (1 + lightness) * value - 1);
Where blend2 and blend3 are:
blend2(left, right, pos):
    return rgb(left.R * (1 - pos) + right.R * pos, same for green, same for blue)

blend3(left, main, right, pos):
    if (pos < 0)
        return blend2(left, main, pos + 1)
    else if (pos > 0)
        return blend2(main, right, pos)
    else
        return main
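As a quick sanity check of these formulas, a tiny C sketch with hypothetical inputs (photoshop_hue = 0, i.e. red; saturation slider at 50, so 0.5; lightness 0; pixel value 0.5):

#include <stdio.h>

int main(void)
{
    // color = blend2(rgb(128, 128, 128), hueRGB, saturation) with hueRGB = red
    double sat = 0.5;
    double r = 128 * (1 - sat) + 255 * sat; // 191.5
    double g = 128 * (1 - sat) +   0 * sat; // 64.0
    double b = 128 * (1 - sat) +   0 * sat; // 64.0
    // lightness = 0, value = 0.5 -> pos = 2 * (1 - 0) * (0.5 - 1) + 1 = 0,
    // so blend3(black, color, white, 0) returns color unchanged.
    printf("colorized pixel: (%.1f, %.1f, %.1f)\n", r, g, b);
    return 0;
}

So a mid-gray source pixel colorized with hue 0 and 50% saturation comes out as (191.5, 64, 64), the desaturated red you would expect.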
I have figured out how Lightness works.
The input parameter brightness b is in [0, 2]; the output is c (a color channel in [0, 1]).
if(b<1) c = b * c;
else c = c + (b-1) * (1-c);
Some tests:
b = 0 >>> c = 0 // black
b = 1 >>> c = c // same color
b = 2 >>> c = 1 // white
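In code, a direct transcription of the formula above (the function name is mine):

// Photoshop-style Lightness: b in [0, 2], channel value c in [0, 1].
static double applyLightness(double c, double b)
{
    if (b < 1.0)
        return b * c;                 // b < 1 darkens toward black
    return c + (b - 1.0) * (1.0 - c); // b > 1 lightens toward white
}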
However, if you choose some interval (e.g. Reds instead of Master), Lightness behaves completely differently, more like Saturation.
Photoshop, dunno. But the theory is usually: the RGB image is converted to HSL/HSV by the particular layer's internal methods; each pixel's HSL is then modified according to the specified parameters, and the result is converted back to RGB for display.
PaintShopPro7 used to split up the H space (assuming a range of 0..360) into discrete increments of 30° (IIRC), so if you bumped only the "yellows", only pixels whose H component was valued 45-75 would be considered for manipulation.
reds 345..15, oranges 15..45, yellows 45..75, yellowgreen 75..105, greens 105..135, etc.
if (h >= 45 && h < 75)
s += s * yellow_percent;
There are alternative possibilities, such as applying a falloff filter, as in:

/* For h = 60, let m = 1, falling off linearly to m = 0 at h = 45 and h = 75. */
m = 1 - abs(h - 60) / 15;
if (m < 0)
    m = 0;
s += s * yellow_percent * m;
Hello, I wrote a colorize shader, and my equation is as follows:
inputRGB is the source image, which should be monochrome: (r + g + b) * 0.333
colorRGB is your destination color
finalRGB is the result
pseudo code:
finalRGB = inputRGB * (colorRGB + inputRGB * 0.5);
I think it's fast and efficient
I translated @Roman Starkov's solution to Java, in case anyone needs it, but for some reason it didn't work well. Then I read a bit and found that the solution is very simple: there are two things that have to be done.

1. When changing the hue or saturation, replace only the hue and saturation of the original image; the lightness stays as it was in the original image. This blend method is called the "luminosity" blend mode (section 10.2.4 of the compositing spec):
https://www.w3.org/TR/compositing-1/#backdrop
2. When changing the lightness in Photoshop, the slider indicates what percentage of the distance to white (or black, for negative values) should be added to the original lightness in HSL.

For example: if the original pixel has lightness 0.7 and the lightness slider is at 20, we need another 0.3 of lightness to reach 1, so the new blended lightness for the pixel is 0.7 + 0.2 * 0.3 = 0.76.
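A small C sketch of that rule as I understand it (an assumed generalization of the example; slider in [-100, 100], lightness in [0, 1]):

// Move the original lightness toward white (positive slider) or
// black (negative slider) by the given percentage of the remaining distance.
static double blendLightness(double originalL, double slider)
{
    double t = slider / 100.0;
    if (t >= 0)
        return originalL + t * (1.0 - originalL); // toward white
    return originalL + t * originalL;             // toward black
}

With originalL = 0.7 and slider = 20 this returns 0.7 + 0.2 * 0.3 = 0.76, matching the example above.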
@Roman Starkov's solution, Java implementation:
//newHue, which is photoshop_hue (i.e. 0..360)
//newSaturation, which is photoshop_saturation / 100.0 (i.e. 0..1)
//newLightness, which is photoshop_lightness / 100.0 (i.e. -1..1)
//returns rgb int array of new color
private static int[] colorizeSinglePixel(int originalPixel, int newHue, float newSaturation, float newLightness)
{
    float[] originalPixelHSV = new float[3];
    Color.colorToHSV(originalPixel, originalPixelHSV);
    float originalPixelLightness = originalPixelHSV[2];

    // Android's HSV arrays use S and V in 0..1, not 0..100.
    float[] hueRGB_HSV = {newHue, 1.0f, 1.0f};
    int hueColor = Color.HSVToColor(hueRGB_HSV);
    int[] hueRGB = {Color.red(hueColor), Color.green(hueColor), Color.blue(hueColor)};

    int[] color = blend2(new int[]{128, 128, 128}, hueRGB, newSaturation);
    int[] blackColor = {Color.red(Color.BLACK), Color.green(Color.BLACK), Color.blue(Color.BLACK)};
    int[] whiteColor = {Color.red(Color.WHITE), Color.green(Color.WHITE), Color.blue(Color.WHITE)};
    if (newLightness <= -1)
    {
        return blackColor;
    }
    else if (newLightness >= 1)
    {
        return whiteColor;
    }
    else if (newLightness >= 0)
    {
        // Keep pos fractional; casting it to int would destroy the blend.
        return blend3(blackColor, color, whiteColor, 2 * (1 - newLightness) * (originalPixelLightness - 1) + 1);
    }
    else
    {
        return blend3(blackColor, color, whiteColor, 2 * (1 + newLightness) * originalPixelLightness - 1);
    }
}
private static int[] blend2(int[] left, int[] right, float pos)
{
    return new int[]{
        (int) (left[0] * (1 - pos) + right[0] * pos),
        (int) (left[1] * (1 - pos) + right[1] * pos),
        (int) (left[2] * (1 - pos) + right[2] * pos)
    };
}
private static int[] blend3(int[] left, int[] main, int[] right, float pos)
{
    if (pos < 0)
    {
        return blend2(left, main, pos + 1);
    }
    else if (pos > 0)
    {
        return blend2(main, right, pos);
    }
    else
    {
        return main;
    }
}
When the “Colorize” checkbox is checked, the lightness of the underlying layer is combined with the values of the Hue and Saturation sliders and converted from HSL to RGB according to the equations at https://en.wikipedia.org/wiki/HSL_and_HSV#From_HSL . (The Lightness slider just remaps the lightness to a subset of the scale as you can see from watching the histogram; the effect is pretty awful and I don’t see why anyone would ever use it.)
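For reference, a minimal C sketch of the standard HSL-to-RGB equations that answer points to (the helper name is mine; h in [0, 360), s and l in [0, 1]):

#include <math.h>

// Standard HSL -> RGB; out receives r, g, b in [0, 1].
static void hslToRgb(double h, double s, double l, double out[3])
{
    double c = (1.0 - fabs(2.0 * l - 1.0)) * s;        // chroma
    double hp = h / 60.0;                              // hue sector
    double x = c * (1.0 - fabs(fmod(hp, 2.0) - 1.0));
    double r = 0, g = 0, b = 0;
    if      (hp < 1) { r = c; g = x; }
    else if (hp < 2) { r = x; g = c; }
    else if (hp < 3) { g = c; b = x; }
    else if (hp < 4) { g = x; b = c; }
    else if (hp < 5) { r = x; b = c; }
    else             { r = c; b = x; }
    double m = l - c / 2.0;                            // shift to match lightness
    out[0] = r + m; out[1] = g + m; out[2] = b + m;
}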