I have the RGB tuple of a pixel we'll call P.
(255, 0, 0) is the color of P with the alpha channel at 1.0.
With the alpha channel at 0.8, P's color becomes (255, 51, 51).
How can I get the color of the pixel that is influencing P's color?
Let's start from the beginning. A pixel with alpha only makes sense when it is blended with something else. If you have an upper layer U with alpha and a lower layer L that is totally opaque, the equation is:
P = (alpha * U) + ((1.0 - alpha) * L)
Rearranging the formula, you obtain:
L = (P - (alpha * U)) / (1.0 - alpha)
Obviously the equation doesn't make sense when the alpha is 1.0, as you'd be dividing by zero.
Plugging your numbers in reveals that R=255, G=255, and B=255 for the pixel L.
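A minimal sketch of that inversion in Python (my own helper, just applying the rearranged equation above to an opaque lower layer):

def recover_background(p, u, alpha):
    # Recover the opaque lower-layer color L from the blended pixel P,
    # the upper-layer color U and its alpha, per L = (P - alpha*U) / (1 - alpha)
    if alpha >= 1.0:
        raise ValueError("alpha = 1.0 leaves no trace of the lower layer")
    return tuple((pc - alpha * uc) / (1.0 - alpha) for pc, uc in zip(p, u))

# Your numbers: blended P = (255, 51, 51), upper U = (255, 0, 0), alpha = 0.8
print(recover_background((255, 51, 51), (255, 0, 0), 0.8))  # roughly (255.0, 255.0, 255.0)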
That is plausible: by convention, the lowest layer in an image is very often all white (255, 255, 255).
Just looking at the numbers you provided:
(1.0 - 0.8) * 255 = 51
Where:
1.0 is the maximum alpha intensity
0.8 is the currently set alpha intensity
255 is the maximum intensity of each of the RGB channels (the color of the background)
This fits the B and G channels of your example.
So, in the general case, it seems to be a simple weighted average between the channel value (either of RGB) and the background color (in your case, white -- 255). Alpha is being used as the weight.
Here's some Python code:
MIN_ALPHA = 0.0
MAX_ALPHA = 1.0
MIN_CH = 0
MAX_CH = 255
BG_VAL = 255  # default background: white

def apply_alpha(old, alpha, bg=BG_VAL):
    # Blend a single channel value `old` over background `bg` with the given alpha.
    assert MIN_ALPHA <= alpha <= MAX_ALPHA
    assert MIN_CH <= old <= MAX_CH
    return old * alpha + (MAX_ALPHA - alpha) * bg

if __name__ == '__main__':
    import sys
    old, alpha = map(float, sys.argv[1:])
    print(apply_alpha(old, alpha))
And some output:
misha@misha-K42Jr:~/Desktop/stackoverflow$ python alpha.py 255 0.8
255.0
misha@misha-K42Jr:~/Desktop/stackoverflow$ python alpha.py 0 0.8
51.0
Try this for other examples (in particular, non-white backgrounds) -- it's probably that simple. If not, edit your question with new examples, and I'll have another look.
I am generating a heat map using an anomaly detection framework, and the output is something like this image.
How can I mark the red regions on the original images, e.g. with a circle around the red part on the original image?
So let's say you have this heatmap.
It was actually generated from some intensity data, some prediction you obtained from your algorithm. That's what we need, not the heatmap itself. It is usually a "grayscale image": it has no color, only intensity values, usually ranging from 0.0 to 1.0 (it could also be 0 to 255), and if you were to plot it, it would look like this.
So now to obtain "red" areas you just need regions with high intensity. We must do "thresholding" to obtain them.
max_val = 1.0 # could be 255 in your case, you must check
prediction /= max_val # normalize
mask = prediction > 0.9
Threshold in this case is 0.9, you can make it smaller to make "red" regions larger. We will get the following mask:
Now we can either blend this mask with our original image:
alpha = 0.5
original[mask] = original[mask] * (1 - alpha) + np.array([0, 0, 255]) * alpha
... and get this:
Or we can find some contours on the mask and encircle them:
contours, _ = cv2.findContours(mask.astype(np.uint8),
                               cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    center = np.average(cnt, axis=0)
    radius = np.max(np.linalg.norm((cnt - center)[:, 0], axis=1))
    radius = max(radius, 10.0)
    cv2.circle(original, center[0].astype(np.int32), int(radius), (0, 0, 255), 2)
... to get this:
I would like to use OpenCV to detect which rectangles in an image have a majority of pixels close to a given color.
Here's an example of an image I would like to process using this to identify rectangular regions that contain mostly gray pixels (possibly roads):
More precisely, given:
dimensions h x w (height and width of the candidate rectangles)
a distance function dist for colors (for example, the norm of the difference between two color vectors, which could be RGB or any other representation)
a color vector C
a maximum distance d for colors to be from C
a minimum percentage rate r of pixels in a given rectangle to be within distance d from C for the rectangle to be of interest,
return a mask M in which each pixel P is 1 if the rectangle of size h x w whose top-left corner is P contains at least r % of its pixels within distance d from C when measured with dist.
In pseudo-code, pixel P in the mask is 1 if and only if:
def rectangle_left_cornered_at_P_is_of_interest(P):
    n_pixels_near_C = size([P' for P' in rectangle(P, P + (h, w)) if dist(P', C) < d])
    return n_pixels_near_C / (h * w) > r
I imagine there may already exist a filter/kernel that does just that (or can be used to do that) in OpenCV, but I am still learning about it and could not identify one by looking at the documentation. Is there such a thing?
You can use HSV for this. You may have to play with the values a bit for the mask, but it will get the job done.
import cv2
import numpy as np

img = cv2.imread(img)  # img is the path to the input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# grayish pixels: any hue, low saturation, moderate-to-high value
# (note that OpenCV's 8-bit hue range is 0..179, so the upper bound is 179 rather than 350)
lower_gray = np.array([0, 5, 50], np.uint8)
upper_gray = np.array([179, 50, 255], np.uint8)
mask = cv2.inRange(hsv, lower_gray, upper_gray)
img_res = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite('gray.png', img_res)
You should also refer to this post; it's a good post on the use of HSV.
Basically, all you will need for this job are:
HSV masks,
Otsu thresholding, blurs, and maybe erosion and dilation.
Use them in whatever combination fits your requirement best.
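If you also need the per-rectangle percentage asked about in the question, one way (this is just a sketch of my own, not an established recipe) is to run a normalized box filter of size h x w over the 0/1 mask: the value at each pixel is then the fraction of matching pixels in the rectangle anchored there, which you can compare against the rate r.

import cv2
import numpy as np

def rectangles_of_interest(mask, h, w, r):
    # mask: uint8 0/255 output of cv2.inRange
    # returns a boolean map; True marks top-left corners of h x w rectangles
    # in which at least a fraction r of the pixels matched the color criterion
    frac = cv2.boxFilter(mask.astype(np.float32) / 255.0, -1, (w, h),
                         anchor=(0, 0), normalize=True,
                         borderType=cv2.BORDER_CONSTANT)
    return frac >= r

# example: 40x60 rectangles (h x w) that are at least 70% grayish
M = rectangles_of_interest(mask, 40, 60, 0.7)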
I want to do something similar to the levels function in Photoshop, but can't find the right OpenCV functions.
Basically I want to stretch the greys in an image to go from almost white to practically black instead of from almost white to slightly greyer, while leaving white as white and black as black (I am using greyscale images).
The following Python code fully implements the Photoshop Adjustments -> Levels dialog.
Change the values for each channel to the desired ones.
img is the input RGB image, of np.uint8 type.
import numpy as np

# input and output levels per channel (R, G, B)
inBlack  = np.array([0, 0, 0], dtype=np.float32)
inWhite  = np.array([255, 255, 255], dtype=np.float32)
inGamma  = np.array([1.0, 1.0, 1.0], dtype=np.float32)
outBlack = np.array([0, 0, 0], dtype=np.float32)
outWhite = np.array([255, 255, 255], dtype=np.float32)

img = np.clip((img - inBlack) / (inWhite - inBlack), 0, 255)      # apply input levels
img = (img ** (1 / inGamma)) * (outWhite - outBlack) + outBlack   # gamma, then output levels
img = np.clip(img, 0, 255).astype(np.uint8)
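For example, to stretch the grey range the way the question describes, you might use something like this (the 50/200 input levels are hypothetical; read suitable values off your image's histogram):

import cv2
import numpy as np

img = cv2.imread('input.png')  # BGR, but with equal per-channel values the order doesn't matter

inBlack  = np.array([50, 50, 50], dtype=np.float32)      # anything at/below 50 maps to black
inWhite  = np.array([200, 200, 200], dtype=np.float32)   # anything at/above 200 maps to white
inGamma  = np.array([1.0, 1.0, 1.0], dtype=np.float32)
outBlack = np.array([0, 0, 0], dtype=np.float32)
outWhite = np.array([255, 255, 255], dtype=np.float32)

out = np.clip((img - inBlack) / (inWhite - inBlack), 0, 255)
out = (out ** (1 / inGamma)) * (outWhite - outBlack) + outBlack
out = np.clip(out, 0, 255).astype(np.uint8)
cv2.imwrite('stretched.png', out)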
I think this is a function mapping input levels to output levels as shown below in the figure.
For example, the orange curve is a straight line from (a, c) to (b, d), blue curve is a straight line from (a, d) to (b, c) and green curve is a non-linear function from (a,c) to (b, d).
We can define the blue curve as (x - a)/(y - d) = (a - b)/(d - c).
Limiting values of a, b, c and d depend on the range supported by the channel that you are applying this transformation to. For gray scale this is [0, 255].
For example, if you want a transformation like (a, d) = (10, 200), (b, c) = (250, 50) for a gray scale image,
y = -150*(x - 10)/240 + 200 for x in [10, 250]
y = x for x in [0, 10) and (250, 255], if you want the remaining values unchanged.
You can use a lookup table in OpenCV (LUT function) to calculate the output levels and apply this transformation to your image or the specific channel. You can apply any piecewise transformation this way.
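A minimal sketch of that LUT idea for the example mapping above (the breakpoints (10, 200) and (250, 50) are the ones from the text; the file name is a placeholder):

import cv2
import numpy as np

x = np.arange(256, dtype=np.float32)
lut = x.copy()                                     # identity outside [10, 250]
inside = (x >= 10) & (x <= 250)
lut[inside] = 200 - 150 * (x[inside] - 10) / 240   # straight line from (10, 200) to (250, 50)
lut = np.clip(lut, 0, 255).astype(np.uint8)

gray = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)
out = cv2.LUT(gray, lut)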
I don't know what "Photoshop levels" are, but from the description, I think you should try the following:
Convert your image to YUV using cvtColor. Y will represent the intensity plane. (You can also use Lab, Luv, or any similar colorspace with separate intensity component).
Split the planes using split, so that the intensity plane will be a separate image.
Call equalizeHist on the intensity plane
Merge the planes back together using merge
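A minimal sketch of those steps (assuming an 8-bit BGR image, which is what cv2.imread returns; the file names are placeholders):

import cv2

img = cv2.imread('input.png')                     # 8-bit BGR
yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)        # Y is the intensity plane
y, u, v = cv2.split(yuv)
y = cv2.equalizeHist(y)                           # equalize intensity only
out = cv2.cvtColor(cv2.merge((y, u, v)), cv2.COLOR_YUV2BGR)
cv2.imwrite('equalized.png', out)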
Details on histogram equalization can be found here
Also note that there's an implementation of a somewhat improved histogram equalization method, CLAHE (but I can't find a better link than this; also, @berak suggested a good link on the topic).
I have been trying to make my own convolution operator instead of using the inbuilt one that comes with Java. I applied the inbuilt convolution operator on this image
link
Using the inbuilt convolution operator with a Gaussian filter, I got this image.
link
Now I run the same image through my code:
public static int convolve(BufferedImage a, int x, int y) {
    int red = 0, green = 0, blue = 0;
    float[] matrix = {
        0.1710991401561097f, 0.2196956447338621f, 0.1710991401561097f,
        0.2196956447338621f, 0.28209479177387814f, 0.2196956447338621f,
        0.1710991401561097f, 0.2196956447338621f, 0.1710991401561097f,
    };
    for (int i = x; i < x + 3; i++) {
        for (int j = y; j < y + 3; j++) {
            int color = a.getRGB(i, j);
            red += Math.round(((color >> 16) & 0xff) * matrix[(i - x) * 3 + j - y]);
            green += Math.round(((color >> 8) & 0xff) * matrix[(i - x) * 3 + j - y]);
            blue += Math.round(((color >> 0) & 0xff) * matrix[(i - x) * 3 + j - y]);
        }
    }
    return (a.getRGB(x, y) & 0xFF000000) | (red << 16) | (green << 8) | (blue);
}
And the result I got is this.
link
Also, how do I optimize the code that I wrote? The inbuilt convolution operator takes 1-2 seconds, while my code, even though it is not yet doing exactly what it is supposed to, takes 5-7 seconds!
I accidentally rotated my source image while uploading. So please ignore that.
First of all, you are needlessly (and wrongly) converting your result from float to int at each cycle of the loop. Your red, green and blue should be of type float and should be cast back to integer only after the convolution (when converted back to RGB):
float red = 0.0f, green = 0.0f, blue = 0.0f;
for (int i = x; i < x + 3; i++) {
    for (int j = y; j < y + 3; j++) {
        int color = a.getRGB(i, j);
        red += ((color >> 16) & 0xff) * matrix[(i - x) * 3 + j - y];
        green += ((color >> 8) & 0xff) * matrix[(i - x) * 3 + j - y];
        blue += ((color >> 0) & 0xff) * matrix[(i - x) * 3 + j - y];
    }
}
return (a.getRGB(x, y) & 0xFF000000) | (((int) red) << 16) | (((int) green) << 8) | ((int) blue);
The bleeding of colors in your result is caused by the coefficients in your matrix being wrong:
0.1710991401561097f + 0.2196956447338621f + 0.1710991401561097f +
0.2196956447338621f + 0.28209479177387814f + 0.2196956447338621f +
0.1710991401561097f + 0.2196956447338621f + 0.1710991401561097f =
1.8452741
The sum of the coefficients in a blurring convolution matrix should be 1.0. When you apply this matrix to an image you may get colors that are over 255. When that happens the channels "bleed" into the next channel (blue to green, etc.).
A completely green image with this matrix would result in:
green = 255 * 1.8452741 ~= 471 = 0x01D7;
rgb = 0xFF01D700;
Which is a less intense green with a hint of red.
You can fix that by dividing the coefficients by 1.8452741, but you want to make sure that:
(int)(255.0f * (sum of coefficients)) = 255
If not, you need to add a check that limits channel values to 255 and doesn't let them wrap around. E.g.:
if (red > 255.0f)
    red = 255.0f;
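For completeness, here is what normalizing the kernel looks like (shown in Python/NumPy for brevity, since that is what the rest of this page uses; in the Java code it is simply a loop dividing every entry of matrix by the sum):

import numpy as np

# the 3x3 kernel from the question; its entries sum to ~1.845, not 1.0
kernel = np.array([
    [0.1710991401561097, 0.2196956447338621, 0.1710991401561097],
    [0.2196956447338621, 0.28209479177387814, 0.2196956447338621],
    [0.1710991401561097, 0.2196956447338621, 0.1710991401561097],
])
kernel /= kernel.sum()   # after this the sum is 1.0, so overall brightness is preserved
print(kernel.sum())      # 1.0 (up to floating point error)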
Regarding efficiency/optimization:
It could be that the difference in speed is explained by this needless casting and the calls to Math.round, but a more likely candidate is the way you are accessing the image. I'm not familiar enough with BufferedImage and Raster to advise you on the most efficient way to access the underlying image buffer.
Does anyone know how adjustment layers work in Photoshop? I need to generate a result image having a source image and HSL values from Hue/Saturation adjustment layer. Conversion to RGB and then multiplication with the source color does not work.
Or is it possible to replace the Hue/Saturation Adjustment Layer with normal layers with appropriately set blending modes (Multiply, Screen, Hue, Saturation, Color, Luminosity, ...)?
If so then how?
Thanks
I've reverse-engineered the computation for when the "Colorize" checkbox is checked. All of the code below is pseudo-code.
The inputs are:
hueRGB, which is an RGB color for HSV(photoshop_hue, 100, 100).ToRGB()
saturation, which is photoshop_saturation / 100.0 (i.e. 0..1)
lightness, which is photoshop_lightness / 100.0 (i.e. -1..1)
value, which is the pixel.ToHSV().Value, scaled into 0..1 range.
The method to colorize a single pixel:
color = blend2(rgb(128, 128, 128), hueRGB, saturation);

if (lightness <= -1)
    return black;
else if (lightness >= 1)
    return white;
else if (lightness >= 0)
    return blend3(black, color, white, 2 * (1 - lightness) * (value - 1) + 1);
else
    return blend3(black, color, white, 2 * (1 + lightness) * value - 1);
Where blend2 and blend3 are:
blend2(left, right, pos):
    return rgb(left.R * (1 - pos) + right.R * pos, same for green, same for blue)

blend3(left, main, right, pos):
    if (pos < 0)
        return blend2(left, main, pos + 1)
    else if (pos > 0)
        return blend2(main, right, pos)
    else
        return main
I have figured out how Lightness works.
The input parameter brightness b is in [0, 2]; the output is c (a color channel value in [0, 1]).
if(b<1) c = b * c;
else c = c + (b-1) * (1-c);
Some tests:
b = 0 >>> c = 0 // black
b = 1 >>> c = c // same color
b = 2 >>> c = 1 // white
However, if you choose some interval (e.g. Reds instead of Master), Lightness behaves completely differently, more like Saturation.
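A minimal sketch of that Master-channel rule (my own helper; channel values assumed to be in 0..1):

def apply_lightness(c, b):
    # c: color channel in [0, 1]; b: brightness parameter in [0, 2]
    return b * c if b < 1 else c + (b - 1) * (1 - c)

print(apply_lightness(0.4, 0.0))  # 0.0 -> black
print(apply_lightness(0.4, 1.0))  # 0.4 -> unchanged
print(apply_lightness(0.4, 2.0))  # 1.0 -> white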
Photoshop, dunno. But the theory usually is: the RGB image is converted to HSL/HSV by the particular layer's internal methods; each pixel's HSL is then modified according to the specified parameters, and the result is converted back to RGB (for display).
PaintShopPro7 used to split up the H space (assuming a range of 0..360) in discrete increments of 30° (IIRC), so if you bumped only the "yellows", only pixels whose H component was in the range 45..75 would be considered for manipulation.
reds 345..15, oranges 15..45, yellows 45..75, yellowgreen 75..105, greens 105..135, etc.
if (h >= 45 && h < 75)
    s += s * yellow_percent;
There are alternative possibilities, such as applying a falloff filter, as in:
/* For h = 60, let m = 1, and let m fall off linearly to 0 at h = 45 and h = 75. */
m = 1 - abs(h - 60) / 15;
if (m < 0)
    m = 0;
s += s * yellow_percent * m;
Hello, I wrote a colorize shader and my equation is as follows:
inputRGB is the source image, which should be monochrome ((r + g + b) * 0.333)
colorRGB is your destination color
finalRGB is the result
pseudo code:
finalRGB = inputRGB * (colorRGB + inputRGB * 0.5);
I think it's fast and efficient
I translated @Roman Starkov's solution to Java in case anyone needs it, but for some reason it didn't work so well. Then I read a little and found that the solution is very simple; there are two things that have to be done:
When changing the hue or saturation, replace only the hue and saturation of the original image and keep the lightness as it was in the original image. This blend method is called the luminosity blend mode (section 10.2.4):
https://www.w3.org/TR/compositing-1/#backdrop
When changing the lightness in Photoshop, the slider indicates what percentage we need to add or subtract to/from the original lightness in order to reach white or black in HSL.
For example:
If the original pixel has 0.7 lightness and the lightness slider is 20, we need 0.3 more lightness to reach 1, so we add to the original pixel's lightness: 0.7 + 0.2 * 0.3 = 0.76.
This will be the new blended lightness value for the new pixel.
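A minimal sketch of that lightness rule (my own helper, not part of the Java code below; lightness in 0..1, slider in -100..100):

def adjust_lightness(l, slider):
    # move l toward white (slider > 0) or toward black (slider < 0)
    # by the given percentage of the remaining distance
    amount = slider / 100.0
    if amount >= 0:
        return l + amount * (1.0 - l)
    return l + amount * l

print(adjust_lightness(0.7, 20))   # 0.76, as in the example above
print(adjust_lightness(0.7, -50))  # 0.35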
@Roman Starkov's solution, Java implementation:
//newHue, which is photoshop_hue (i.e. 0..360)
//newSaturation, which is photoshop_saturation / 100.0 (i.e. 0..1)
//newLightness, which is photoshop_lightness / 100.0 (i.e. -1..1)
//returns rgb int array of new color
private static int[] colorizeSinglePixel(int originalPixel, int newHue, float newSaturation, float newLightness)
{
    float[] originalPixelHSV = new float[3];
    Color.colorToHSV(originalPixel, originalPixelHSV);
    float originalPixelLightness = originalPixelHSV[2];

    float[] hueRGB_HSV = {newHue, 100.0f, 100.0f};
    int[] hueRGB = {Color.red(Color.HSVToColor(hueRGB_HSV)),
                    Color.green(Color.HSVToColor(hueRGB_HSV)),
                    Color.blue(Color.HSVToColor(hueRGB_HSV))};

    int[] color = blend2(new int[]{128, 128, 128}, hueRGB, newSaturation);
    int[] blackColor = new int[]{Color.red(Color.BLACK), Color.green(Color.BLACK), Color.blue(Color.BLACK)};
    int[] whiteColor = new int[]{Color.red(Color.WHITE), Color.green(Color.WHITE), Color.blue(Color.WHITE)};

    if (newLightness <= -1)
    {
        return blackColor;
    }
    else if (newLightness >= 1)
    {
        return whiteColor;
    }
    else if (newLightness >= 0)
    {
        // blend position must stay fractional, as in the original pseudocode
        return blend3(blackColor, color, whiteColor, 2 * (1 - newLightness) * (originalPixelLightness - 1) + 1);
    }
    else
    {
        return blend3(blackColor, color, whiteColor, 2 * (1 + newLightness) * originalPixelLightness - 1);
    }
}

private static int[] blend2(int[] left, int[] right, float pos)
{
    return new int[]{(int) (left[0] * (1 - pos) + right[0] * pos),
                     (int) (left[1] * (1 - pos) + right[1] * pos),
                     (int) (left[2] * (1 - pos) + right[2] * pos)};
}

private static int[] blend3(int[] left, int[] main, int[] right, float pos)
{
    if (pos < 0)
    {
        return blend2(left, main, pos + 1);
    }
    else if (pos > 0)
    {
        return blend2(main, right, pos);
    }
    else
    {
        return main;
    }
}
When the “Colorize” checkbox is checked, the lightness of the underlying layer is combined with the values of the Hue and Saturation sliders and converted from HSL to RGB according to the equations at https://en.wikipedia.org/wiki/HSL_and_HSV#From_HSL . (The Lightness slider just remaps the lightness to a subset of the scale as you can see from watching the histogram; the effect is pretty awful and I don’t see why anyone would ever use it.)
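A minimal sketch of that description using Python's standard colorsys module (my own illustration, ignoring the Lightness-slider remapping; colorsys works in 0..1 ranges and uses HLS ordering):

import colorsys

def colorize_pixel(r, g, b, hue_deg, sat):
    # keep the pixel's lightness, take hue and saturation from the sliders
    _, l, _ = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    nr, ng, nb = colorsys.hls_to_rgb(hue_deg / 360.0, l, sat)
    return round(nr * 255), round(ng * 255), round(nb * 255)

print(colorize_pixel(180, 120, 60, 30, 0.5))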