Color over grayscale image - image-processing

I want to color a grayscale image with a single color. For example, given the pixel RGB(34,34,34), I want to tint it with the color RGB(200,100,50) to get a new RGB pixel, and I need to do this for every pixel in the image.
White pixels get the color RGB(200,100,50); darker pixels get a darker shade of that color.
So the result is still a grayscale-like image, but ranging from black to the selected color instead of black to white.
I want to program this from scratch, without using any built-in functions.

All you need to do is use the ratio of gray to white as a multiplier for your color. I think you'll find that this gives better results than a blend.
new_red = gray * target_red / 255
new_green = gray * target_green / 255
new_blue = gray * target_blue / 255
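A minimal sketch of that per-pixel multiply over a whole image, assuming NumPy and a 2-D uint8 gray array (the function and names are mine):

import numpy as np

def colorize(gray, target=(200, 100, 50)):
    # gray is a 2-D uint8 array; scale each target channel by gray/255
    gray = gray.astype(np.float32)
    out = np.stack([gray * c / 255.0 for c in target], axis=-1)
    return out.astype(np.uint8)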

From what you describe, I figure you're looking for a blending algorithm.
What you need is a blending percentage (bP).
new red = red1 * bP + red2 * (1 - bP)
new green = green1 * bP + green2 * (1 - bP)
new blue = blue1 * bP + blue2 * (1 - bP)
Your base color is RGB(34, 34, 34); the color to blend in is RGB(200, 100, 50). With a blending percentage of, for example, 50% -> 0.5, that gives:
New red = 34 * 0.5 + 200 * (1 - 0.5) = 117
New green = 34 * 0.5 + 100 * (1 - 0.5) = 67
New blue = 34 * 0.5 + 50 * (1 - 0.5) = 42
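As a quick Python sketch of that blend (function and names are mine):

def blend(pixel1, pixel2, bp=0.5):
    # weighted average of the two colors, channel by channel
    return tuple(round(c1 * bp + c2 * (1 - bp)) for c1, c2 in zip(pixel1, pixel2))

blend((34, 34, 34), (200, 100, 50))  # -> (117, 67, 42)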

Related

How can I color in this Koch-Star with the turtle module?

How can I change the colour of my Turtle and of the Koch-Star it draws? I want it to not be black, and to be coloured in: maybe a blue outline, with the shape filled in light blue.
import math
import turtle

def koch_segment(trtl, length, currentdepth):
    """
    trtl : turtle object
        The turtle window object to be drawn to
    length : float
        The length of the line the Koch segment is drawn with
    currentdepth : integer
        The current iteration depth of Koch - 1 is a straight line of the full length
        ('depth' is the target recursion depth, defined at module level)
    """
    if currentdepth == depth:
        trtl.forward(length)
    else:
        currentlength = length / 3.0
        koch_segment(trtl, currentlength, currentdepth + 1)
        trtl.left(60)
        koch_segment(trtl, currentlength, currentdepth + 1)
        trtl.right(120)
        koch_segment(trtl, currentlength, currentdepth + 1)
        trtl.left(60)
        koch_segment(trtl, currentlength, currentdepth + 1)

def setup_turtle(depth):
    wn = turtle.Screen()
    wx = wn.window_width() * .5
    wh = wn.window_height() * .5
    base_lgth = 2.0 / math.sqrt(3.0) * wh  # the base length depends on the screen size
    myturtle = turtle.Turtle()
    myturtle.speed(0.5 * (depth + 1))  # value between 1 and 10 (fast)
    myturtle.penup()
    myturtle.setposition(-wx / 2, -wh / 2)  # start at the lower-left quadrant midpoint
    myturtle.pendown()
    myturtle.left(60)
    return myturtle, base_lgth
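To answer the color question itself, here is a sketch that uses the definitions above: set the pen and fill colors, wrap the drawing in begin_fill()/end_fill(), and draw three Koch sides to close the snowflake. The depth value and the three-sided loop are my assumptions:

depth = 3  # target recursion depth read by koch_segment
myturtle, base_lgth = setup_turtle(depth)
myturtle.pencolor("blue")        # blue outline
myturtle.fillcolor("lightblue")  # light blue fill
myturtle.begin_fill()
for _ in range(3):               # three Koch curves close the snowflake
    koch_segment(myturtle, base_lgth, 1)
    myturtle.right(120)
myturtle.end_fill()
turtle.done()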

Avoid light color in random UIColor

I'm generating a random UIColor and I want to avoid light colors like yellow, light green, etc. Here's my code:
+ (UIColor *)generateRandom {
    CGFloat hue = ( arc4random() % 256 / 256.0 );  //  0.0 to 1.0
    CGFloat saturation = ( arc4random() % 128 / 256.0 ) + 0.5;  //  0.5 to 1.0, away from white
    CGFloat brightness = ( arc4random() % 128 / 256.0 ) + 0.5;  //  0.5 to 1.0, away from black
    return [UIColor colorWithHue:hue saturation:saturation brightness:brightness alpha:1];
}
I'm using this for the background color of a UITableViewCell. The cell's textLabel color is white, so if the background color is light green or some other light color, the text isn't clearly visible.
How can I fix this? Can we avoid generating light colors, or detect which colors are light?
If I can detect that a color is light, I can change the text color to something else.
It sounds like you want to avoid colors close to white. Since you're already in HSV space, this should be a simple matter of setting a distance from white to avoid. A simple implementation would limit the saturation and brightness to be no closer than some threshold. Something like:
if (saturation < kSatThreshold)
{
    saturation = kSatThreshold;
}
if (brightness > kBrightnessThreshold)
{
    brightness = kBrightnessThreshold;
}
Something more sophisticated would be to check the distance from white and if it's too close, push it back out:
CGFloat deltaH = hue - kWhiteHue;
CGFloat deltaS = saturation - kWhiteSaturation;
CGFloat deltaB = brightness - kWhiteBrightness;
CGFloat distance = sqrt(deltaH * deltaH + deltaS * deltaS + deltaB * deltaB);
if (distance < kDistanceThreshold)
{
    // normalize the distance vector
    deltaH /= distance;
    deltaS /= distance;
    deltaB /= distance;
    hue = kWhiteHue + deltaH * kDistanceThreshold;
    saturation = kWhiteSaturation + deltaS * kDistanceThreshold;
    brightness = kWhiteBrightness + deltaB * kDistanceThreshold;
}
Light colors are those with high brightness (or lightness, luminosity...).
Generate colors with random hue and saturation, but limit the randomness of brightness to low values, like 0 to 0.5, or keep the brightness constant. If you are showing the colors side by side, the aesthetic impact is usually better if you change only 2 of the 3 components in HSB (HSV, HSL).
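For example, a minimal Python sketch of that idea (function and names are mine):

import colorsys
import random

def random_dark_color():
    h = random.random()           # any hue
    s = random.random()           # any saturation
    v = random.uniform(0.0, 0.5)  # brightness capped low, so never a light color
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    return round(r * 255), round(g * 255), round(b * 255)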

Pi live video color detection

I'm planning to create an ambilight effect behind my TV by pointing a camera at the screen. I think the easiest way is to use a simple IP camera. I need color detection to detect the colors on the screen and translate them to RGB values on the LED strip.
I have a Raspberry Pi as a hub in the middle of my house, and I was thinking of using it like this:
the IP camera points at my screen; the Pi processes the video, translates it to RGB values, and sends them to an MQTT server; a NodeMCU behind the TV receives the colors.
How can I detect colors on a live stream (at multiple points) on my Pi?
If you can create any background colour, the best approach might be to calculate the k-means or median to get the "most popular" colours. If the ambient light can be different in different places, then using an ROI at each image edge you can check which colour is dominant in that area (by comparing the number of samples of different colours).
If you only have limited colours (e.g. only R, G and B), then you can simply check which channel has the highest intensity in the desired region:
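A minimal sketch of that channel check (my function; remember OpenCV loads images as BGR):

import numpy as np

def dominant_channel(roi):
    # roi is an H x W x 3 BGR patch; return the name of the strongest channel
    means = roi.reshape(-1, 3).mean(axis=0)
    return ("blue", "green", "red")[int(np.argmax(means))]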
I wrote the code below with the assumption that you can create any RGB ambient color.
As a test image I use this one:
The code is:
import cv2
import numpy as np

# Read an input image (in your case this will be an image from the camera)
img = cv2.imread('saul2.png', cv2.IMREAD_COLOR)

# The block_size defines how big the patches around the image are;
# the more LEDs you have and the more segments you want, the lower block_size can be
block_size = 60

# Get the dimensions of the image
height, width, chan = img.shape

# Calculate the number of patches along the height and width (integer division)
h_steps = height // block_size
w_steps = width // block_size

# In one loop I calculate both left and right ambient colors, or top and bottom
ambient_patch1 = np.zeros((block_size, block_size, 3))
ambient_patch2 = np.zeros((block_size, block_size, 3))

# Create the output image (just for visualization:
# the input image in the middle, a 10px black border, and the ambient colors)
output = cv2.copyMakeBorder(img, 70, 70, 70, 70, cv2.BORDER_CONSTANT, value=0)

for i in range(h_steps):
    # Get the left and right regions of the image
    left_roi = img[i * block_size : (i + 1) * block_size, 0 : block_size]
    right_roi = img[i * block_size : (i + 1) * block_size, -block_size:]
    left_med = np.median(left_roi, (0, 1))    # the actual color for the given block (on the left)
    right_med = np.median(right_roi, (0, 1))  # and on the right
    # Create patches with the ambient color - this is just for visualization
    ambient_patch1[:, :] = left_med
    ambient_patch2[:, :] = right_med
    # Put them in the output image (the extra 70 is because the input image is shifted by 70px)
    output[70 + i * block_size : 70 + (i + 1) * block_size, 0 : block_size] = ambient_patch1
    output[70 + i * block_size : 70 + (i + 1) * block_size, -block_size:] = ambient_patch2

for i in range(w_steps):
    # Get the top and bottom regions of the image
    top_roi = img[0 : block_size, i * block_size : (i + 1) * block_size]
    bottom_roi = img[-block_size:, i * block_size : (i + 1) * block_size]
    top_med = np.median(top_roi, (0, 1))        # the actual color for the given block (on top)
    bottom_med = np.median(bottom_roi, (0, 1))  # and on the bottom
    # Create patches with the ambient color - this is just for visualization
    ambient_patch1[:, :] = top_med
    ambient_patch2[:, :] = bottom_med
    # Put them in the output image (the extra 70 is because the input image is shifted by 70px)
    output[0 : block_size, 70 + i * block_size : 70 + (i + 1) * block_size] = ambient_patch1
    output[-block_size:, 70 + i * block_size : 70 + (i + 1) * block_size] = ambient_patch2

# Save the output image
cv2.imwrite('saul_output.png', output)
And this gives a result as follows:
I hope this helps!

How to ensure that random UIColors are visible on a white background in iOS

I'm using this code to generate a random UIColor for UILabel text in an iOS app with a white background.
The problem is that some of the text turns out to be invisible against the white background.
How would you modify the code to ensure that any color chosen is reasonably visible on a white background?
This question is as much about colors as it is about programming.
+ (UIColor *)randomColor {
    // arc4random() returns 0..UINT32_MAX, so divide by UINT32_MAX (not RAND_MAX) to get 0..1
    CGFloat red = (CGFloat)arc4random() / (CGFloat)UINT32_MAX;
    CGFloat blue = (CGFloat)arc4random() / (CGFloat)UINT32_MAX;
    CGFloat green = (CGFloat)arc4random() / (CGFloat)UINT32_MAX;
    return [UIColor colorWithRed:red green:green blue:blue alpha:1.0];
}
What I would do is determine the "gray" level of the RGB value. If it's "too close to white", then try again.
A formula I've used is:
float gray = 0.299 * red + 0.587 * green + 0.114 * blue;
This gives 0 for black and 1 for white. Pick a threshold such as 0.6 or whatever works for you.
+ (UIColor *)randomColor {
    while (1) {
        CGFloat red = (CGFloat)arc4random() / (CGFloat)UINT32_MAX;
        CGFloat blue = (CGFloat)arc4random() / (CGFloat)UINT32_MAX;
        CGFloat green = (CGFloat)arc4random() / (CGFloat)UINT32_MAX;
        CGFloat gray = 0.299 * red + 0.587 * green + 0.114 * blue;
        if (gray < 0.6) {
            return [UIColor colorWithRed:red green:green blue:blue alpha:1.0];
        }
    }
}
White is (1, 1, 1). You can hack it so that as you generate colors, you retry until the values are not too close to (1.0, 1.0, 1.0); really, if one component is around 0.5 while the rest are close to 1.0, you're fine. Statistically you should reach a color that meets the rules within a few tries. You would have to determine by inspection what threshold is acceptable.
I think Maddy's answer is the best answer you've gotten.
Another, similar approach would be to calculate the Pythagorean distance between your color and the background color. Pseudocode, where the two colors are red1/green1/blue1 and red2/green2/blue2:
CGFloat difference = sqrt( (red1 - red2)^2 + (green1 - green2)^2 +
(blue1 - blue2)^2 );
That approach would let you find colors that are different from any arbitrary background color. To allow for human color perception, you could also use the weight values from Maddy's grayscale formula:
CGFloat difference = sqrt( 0.299 * (red1 - red2)^2 + 0.587 * (green1 - green2)^2 +
0.114 * (blue1 - blue2)^2 );
Better still would be to convert the RGB values to LAB color space and compare the distance between colors in LAB space, but that's way beyond the scope of this discussion.
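For the curious, here is a rough Python sketch of that Lab route (standard sRGB linearization, D65 white point, CIE76 Euclidean distance; a production app should use a color library instead):

def srgb_to_lab(r, g, b):
    # r, g, b in 0..1; linearize sRGB, convert to XYZ (D65), then to Lab
    def lin(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)
    x = (0.4124 * r + 0.3576 * g + 0.1805 * b) / 0.95047
    y =  0.2126 * r + 0.7152 * g + 0.0722 * b
    z = (0.0193 * r + 0.1192 * g + 0.9505 * b) / 1.08883
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f(y) - 16, 500 * (f(x) - f(y)), 200 * (f(y) - f(z))

def color_distance(rgb1, rgb2):
    # CIE76 delta-E: plain Euclidean distance in Lab space
    lab1, lab2 = srgb_to_lab(*rgb1), srgb_to_lab(*rgb2)
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5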
An (R, G, B) color turns to white as the values get closer to (255, 255, 255). So make your random number generator avoid values that are too close to 255 for all of these components at once.
Hope that helps.

Algorithm for Hue/Saturation Adjustment Layer from Photoshop

Does anyone know how adjustment layers work in Photoshop? I need to generate a result image having a source image and HSL values from Hue/Saturation adjustment layer. Conversion to RGB and then multiplication with the source color does not work.
Or is it possible to replace the Hue/Saturation adjustment layer with normal layers with appropriately set blending modes (Multiply, Screen, Hue, Saturation, Color, Luminosity, ...)?
If so then how?
Thanks
I've reverse-engineered the computation for when the "Colorize" checkbox is checked. All of the code below is pseudo-code.
The inputs are:
hueRGB, which is an RGB color for HSV(photoshop_hue, 100, 100).ToRGB()
saturation, which is photoshop_saturation / 100.0 (i.e. 0..1)
lightness, which is photoshop_lightness / 100.0 (i.e. -1..1)
value, which is the pixel.ToHSV().Value, scaled into 0..1 range.
The method to colorize a single pixel:
color = blend2(rgb(128, 128, 128), hueRGB, saturation);
if (lightness <= -1)
    return black;
else if (lightness >= 1)
    return white;
else if (lightness >= 0)
    return blend3(black, color, white, 2 * (1 - lightness) * (value - 1) + 1)
else
    return blend3(black, color, white, 2 * (1 + lightness) * (value) - 1)
Where blend2 and blend3 are:
blend2(left, right, pos):
    return rgb(left.R * (1-pos) + right.R * pos, same for green, same for blue)

blend3(left, main, right, pos):
    if (pos < 0)
        return blend2(left, main, pos + 1)
    else if (pos > 0)
        return blend2(main, right, pos)
    else
        return main
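Here is the same pseudocode as a small runnable Python sketch (my transcription, not verified against Photoshop itself; channels are 0..255, saturation 0..1, lightness -1..1, value 0..1):

def blend2(left, right, pos):
    # linear interpolation between two colors, channel by channel
    return tuple(l * (1 - pos) + r * pos for l, r in zip(left, right))

def blend3(left, main, right, pos):
    # pos in -1..1: blend toward 'left' below 0, toward 'right' above 0
    if pos < 0:
        return blend2(left, main, pos + 1)
    if pos > 0:
        return blend2(main, right, pos)
    return main

def colorize(hue_rgb, saturation, lightness, value):
    black, white = (0.0, 0.0, 0.0), (255.0, 255.0, 255.0)
    color = blend2((128, 128, 128), hue_rgb, saturation)
    if lightness <= -1:
        return black
    if lightness >= 1:
        return white
    if lightness >= 0:
        return blend3(black, color, white, 2 * (1 - lightness) * (value - 1) + 1)
    return blend3(black, color, white, 2 * (1 + lightness) * value - 1)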
I have figured out how Lightness works.
The input parameter brightness b is in [0, 2]; the output is c (a color channel).
if(b<1) c = b * c;
else c = c + (b-1) * (1-c);
Some tests:
b = 0 >>> c = 0 // black
b = 1 >>> c = c // same color
b = 2 >>> c = 1 // white
However, if you choose some interval (e.g. Reds instead of Master), Lightness behaves completely differently, more like Saturation.
Photoshop, dunno. But the theory is usually: the RGB image is converted to HSL/HSV by the particular layer's internal methods; each pixel's HSL is then modified according to the specified parameters, and the result is converted back (for display) to RGB.
PaintShopPro7 used to split up the H space (assuming a range of 0..360) into discrete increments of 30° (IIRC), so if you bumped only the "yellows", only pixels whose H component was in the 45..75 range would be considered for manipulation.
reds 345..15, oranges 15..45, yellows 45..75, yellowgreen 75..105, greens 105..135, etc.
if (h >= 45 && h < 75)
    s += s * yellow_percent;
There are alternative possibilities, such as applying a falloff filter, as in:
/* For h = 60 let m = 1, falling off linearly to m = 0 at h = 75. */
m = 1 - abs(h - 60) / 15;
if (m < 0)
    m = 0;
s += s * yellow_percent * m;
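Put together, the falloff variant might look like this in Python (my function; the band edges are the 45..75 "yellows" example above):

def adjust_yellows(h, s, yellow_percent):
    # full effect at h = 60, fading linearly to nothing at h = 45 and h = 75
    m = max(0.0, 1 - abs(h - 60) / 15.0)
    return s + s * yellow_percent * m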
Hello, I wrote a colorize shader and my equation is as follows:
inputRGB is the source image, which should be monochrome ((r + g + b) * 0.333)
colorRGB is your destination color
finalRGB is the result
Pseudo code:
finalRGB = inputRGB * (colorRGB + inputRGB * 0.5);
I think it's fast and efficient.
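As a quick Python sketch of that formula (my function; channels normalized to 0..1):

def colorize_pixel(input_gray, color_rgb):
    # input_gray is the monochrome value (r + g + b) * 0.333, in 0..1
    # note: results can exceed 1.0 and may need clamping
    return tuple(input_gray * (c + input_gray * 0.5) for c in color_rgb)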
I translated @Roman Starkov's solution to Java in case anyone needs it, but for some reason it didn't work so well. Then I read a little and found that the solution is very simple; there are 2 things that have to be done:
When changing the hue or saturation, replace only the hue and saturation of the original image; the lightness stays as it was in the original image. This blend method is called the "luminosity" blend mode (10.2.4):
https://www.w3.org/TR/compositing-1/#backdrop
When changing the lightness in Photoshop, the slider indicates what percentage we need to add or subtract to/from the original lightness in order to reach white or black in HSL.
For example:
If the original pixel has 0.7 lightness and the lightness slider is 20,
we need 0.3 more lightness to reach 1,
so the new blended lightness for the pixel is 0.7 + 0.2 * 0.3 = 0.76.
@Roman Starkov's solution, Java implementation:
// newHue is photoshop_hue (i.e. 0..360)
// newSaturation is photoshop_saturation / 100.0 (i.e. 0..1)
// newLightness is photoshop_lightness / 100.0 (i.e. -1..1)
// returns the new color as an RGB int array
private static int[] colorizeSinglePixel(int originalPixel, int newHue, float newSaturation, float newLightness)
{
    float[] originalPixelHSV = new float[3];
    Color.colorToHSV(originalPixel, originalPixelHSV);
    float originalPixelValue = originalPixelHSV[2];

    // Android expects S and V in 0..1, not 0..100
    float[] hueRGB_HSV = {newHue, 1.0f, 1.0f};
    int[] hueRGB = {Color.red(Color.HSVToColor(hueRGB_HSV)),
                    Color.green(Color.HSVToColor(hueRGB_HSV)),
                    Color.blue(Color.HSVToColor(hueRGB_HSV))};

    int[] color = blend2(new int[]{128, 128, 128}, hueRGB, newSaturation);
    int[] blackColor = {Color.red(Color.BLACK), Color.green(Color.BLACK), Color.blue(Color.BLACK)};
    int[] whiteColor = {Color.red(Color.WHITE), Color.green(Color.WHITE), Color.blue(Color.WHITE)};

    if (newLightness <= -1)
    {
        return blackColor;
    }
    else if (newLightness >= 1)
    {
        return whiteColor;
    }
    else if (newLightness >= 0)
    {
        // the blend position must stay a float; truncating it to int breaks the blend
        return blend3(blackColor, color, whiteColor, 2 * (1 - newLightness) * (originalPixelValue - 1) + 1);
    }
    else
    {
        return blend3(blackColor, color, whiteColor, 2 * (1 + newLightness) * originalPixelValue - 1);
    }
}

private static int[] blend2(int[] left, int[] right, float pos)
{
    return new int[]{(int) (left[0] * (1 - pos) + right[0] * pos),
                     (int) (left[1] * (1 - pos) + right[1] * pos),
                     (int) (left[2] * (1 - pos) + right[2] * pos)};
}

private static int[] blend3(int[] left, int[] main, int[] right, float pos)
{
    if (pos < 0)
    {
        return blend2(left, main, pos + 1);
    }
    else if (pos > 0)
    {
        return blend2(main, right, pos);
    }
    else
    {
        return main;
    }
}
When the “Colorize” checkbox is checked, the lightness of the underlying layer is combined with the values of the Hue and Saturation sliders and converted from HSL to RGB according to the equations at https://en.wikipedia.org/wiki/HSL_and_HSV#From_HSL . (The Lightness slider just remaps the lightness to a subset of the scale as you can see from watching the histogram; the effect is pretty awful and I don’t see why anyone would ever use it.)
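In Python, the standard library's colorsys module implements those HSL<->RGB equations, which makes the conversion step easy to try out (note that hls_to_rgb takes its arguments in H, L, S order, all in 0..1; the values below are examples of mine):

import colorsys

hue, saturation = 30 / 360.0, 0.5  # slider values scaled to 0..1
lightness = 0.7                    # the underlying layer's per-pixel lightness
r, g, b = colorsys.hls_to_rgb(hue, lightness, saturation)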
