Create a darker color - actionscript

I'm developing an application that loads a dynamic color from an XML file. On some occasions, I need the application to derive a similar but darker color from the hexadecimal value I have. Each channel would go down by 0x33 (51 in decimal). Something like:
0xFFFFFF would become 0xCCCCCC
0x6600CC would become 0x330099
The hex values I have are strings.
I just can't figure out how to solve it in a simple way.
Please, help!
And remember, it's AS2!

You should search for one of the many colour libraries out there. If you want to do it yourself, you need some understanding of bitwise operators. I think reducing the colours by some percent instead of subtracting could give you a nicer result.
var color:Number = parseInt(colorString, 16); // radix 16; note AS2 has no int type
// use the shift operator to get the individual colour channels
var red:Number = (color >> 16) & 0xFF;
var green:Number = (color >> 8) & 0xFF;
var blue:Number = color & 0xFF;
// change colours by subtracting; clamp so each channel stays between 0 and 255
/*
red = Math.max(red - 0x33, 0);
green = Math.max(green - 0x33, 0);
blue = Math.max(blue - 0x33, 0); */
// make colours darker by 10%
red = Math.floor(red * 0.9);
green = Math.floor(green * 0.9);
blue = Math.floor(blue * 0.9);
// combine the individual channels again
color = (red << 16) | (green << 8) | blue;
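For reference, here is the same idea as a small Python sketch (function names are my own), with both the subtract-0x33 variant from the question and the percentage variant from the answer:

```python
def darken_subtract(hex_string, step=0x33):
    """Darken by subtracting `step` from each channel, clamping at 0."""
    color = int(hex_string, 16)                  # parse "0x6600CC" or "6600CC"
    red = max(((color >> 16) & 0xFF) - step, 0)
    green = max(((color >> 8) & 0xFF) - step, 0)
    blue = max((color & 0xFF) - step, 0)
    return (red << 16) | (green << 8) | blue     # recombine the channels

def darken_scale(hex_string, factor=0.9):
    """Darken by scaling each channel down by `factor`."""
    color = int(hex_string, 16)
    red = int(((color >> 16) & 0xFF) * factor)
    green = int(((color >> 8) & 0xFF) * factor)
    blue = int((color & 0xFF) * factor)
    return (red << 16) | (green << 8) | blue
```

With the question's examples, darken_subtract("0xFFFFFF") gives 0xCCCCCC and darken_subtract("0x6600CC") gives 0x330099, since the green channel clamps at 0.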

Related

Android bitmap operations using ndk

Currently I'm developing an Android application that involves some image processing. After some research, I found that it is better to use the Android NDK for bitmap manipulation to get good performance. I found some basic examples like this one:
static void myFunction(AndroidBitmapInfo* info, void* pixels){
    int xx, yy, red, green, blue;
    uint32_t* line;
    for(yy = 0; yy < info->height; yy++){
        line = (uint32_t*)pixels;
        for(xx = 0; xx < info->width; xx++){
            //extract the RGB values from the pixel
            blue = (int)((line[xx] & 0x00FF0000) >> 16);
            green = (int)((line[xx] & 0x0000FF00) >> 8);
            red = (int)(line[xx] & 0x000000FF);
            //change the RGB values
            // set the new pixel back in
            line[xx] =
                ((blue << 16) & 0x00FF0000) |
                ((green << 8) & 0x0000FF00) |
                (red & 0x000000FF);
        }
        pixels = (char*)pixels + info->stride;
    }
}
I used this code and it works very well for basic operations, but I want to make a more complex one, like a filter, where I need to access the above and below pixels from the current pixel. To be more specific, I'll give you an example: for dilation and erosion operations we move through pixels and we verify if the pixels from north west, north, north east, west, east, south west, south and south east (for 8 neighbors structure element) are object pixels. What I need to know is how can I access the values of the north and south pixels using the above code.
I'm not very familiar with image processing in C (pointers etc.).
Thanks!
I've edited your function a little; basically, to get a pixel's position in the array, the formula is:
position = y*width+x
static void myFunction(AndroidBitmapInfo* info, void* pixels){
    int xx, yy, red, green, blue;
    uint32_t* px = (uint32_t*)pixels; // assumes stride == width * 4
    for(yy = 0; yy < info->height; yy++){
        for(xx = 0; xx < info->width; xx++){
            //this formula gives you the index of the pixel with coordinates (xx, yy) in 'px'
            int position = yy * info->width + xx;
            //extract the RGB values from the pixel
            blue = (int)((px[position] & 0x00FF0000) >> 16);
            green = (int)((px[position] & 0x0000FF00) >> 8);
            red = (int)(px[position] & 0x000000FF);
            //change the RGB values
            // set the new pixel back in
            px[position] =
                ((blue << 16) & 0x00FF0000) |
                ((green << 8) & 0x0000FF00) |
                (red & 0x000000FF);
            //so the position of the south pixel is (yy+1)*info->width+xx
            //and the one of the north is (yy-1)*info->width+xx
            //the left one is  yy*info->width+xx-1
            //the right one is yy*info->width+xx+1
        }
    }
}
Assuming you want to read/edit a pixel with coordinates (x, y), you must check that 0 <= y < height and 0 <= x < width; otherwise you may access nonexistent pixels and get a memory access error.
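To illustrate the index formula and the bounds check together, here is a small Python sketch (names are my own) that collects the up-to-8 neighbours of a pixel stored in a flat array:

```python
def neighbours(pixels, width, height, x, y):
    """Return the values of the valid neighbours of (x, y).

    `pixels` is a flat list of length width*height, indexed as y*width + x,
    mirroring the C code's position formula.
    """
    result = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue                              # skip the centre pixel
            nx, ny = x + dx, y + dy
            if 0 <= nx < width and 0 <= ny < height:  # the bounds check
                result.append(pixels[ny * width + nx])
    return result
```

For a 3x3 image, the centre pixel has all 8 neighbours, while the corner (0, 0) only has 3, because the bounds check discards the out-of-image positions.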

Convolution operator yielding spectrum of colors

I have been trying to make my own convolution operator instead of using the inbuilt one that comes with Java. I applied the inbuilt convolution operator on this image
link
Using the inbuilt convolution operator with a Gaussian filter, I got this image.
link
Now I run the same image using my code
public static int convolve(BufferedImage a, int x, int y){
    int red = 0, green = 0, blue = 0;
    float[] matrix = {
        0.1710991401561097f, 0.2196956447338621f, 0.1710991401561097f,
        0.2196956447338621f, 0.28209479177387814f, 0.2196956447338621f,
        0.1710991401561097f, 0.2196956447338621f, 0.1710991401561097f,
    };
    for(int i = x; i < x+3; i++){
        for(int j = y; j < y+3; j++){
            int color = a.getRGB(i, j);
            red += Math.round(((color >> 16) & 0xff) * matrix[(i-x)*3 + j-y]);
            green += Math.round(((color >> 8) & 0xff) * matrix[(i-x)*3 + j-y]);
            blue += Math.round(((color >> 0) & 0xff) * matrix[(i-x)*3 + j-y]);
        }
    }
    return (a.getRGB(x, y) & 0xFF000000) | (red << 16) | (green << 8) | (blue);
}
And The result I got is this.
link
Also, how do I optimize the code that I wrote? The inbuilt convolution operator takes 1-2 seconds, while my code, even though it is not yet producing the intended result, takes 5-7 seconds!
I accidentally rotated my source image while uploading. So please ignore that.
First of all, you are needlessly (and wrongly) converting your result from float to int at each iteration of the loop. Your red, green and blue should be of type float and should be cast back to int only after the convolution, when converting back to RGB:
float red = 0.0f, green = 0.0f, blue = 0.0f;
for(int i = x; i < x+3; i++){
    for(int j = y; j < y+3; j++){
        int color = a.getRGB(i, j);
        red += ((color >> 16) & 0xff) * matrix[(i-x)*3 + j-y];
        green += ((color >> 8) & 0xff) * matrix[(i-x)*3 + j-y];
        blue += ((color >> 0) & 0xff) * matrix[(i-x)*3 + j-y];
    }
}
return (a.getRGB(x, y) & 0xFF000000) | (((int)red) << 16) | (((int)green) << 8) | ((int)blue);
The bleeding of colors in your result is caused because your coefficients in matrix are wrong:
0.1710991401561097f + 0.2196956447338621f + 0.1710991401561097f +
0.2196956447338621f + 0.28209479177387814f + 0.2196956447338621f +
0.1710991401561097f + 0.2196956447338621f + 0.1710991401561097f =
1.8452741
The sum of the coefficients in a blurring convolution matrix should be 1.0. When you apply this matrix to an image you may get colors that are over 255. When that happens the channels "bleed" into the next channel (blue to green, etc.).
A completely green image with this matrix would result in:
green = 255 * 1.8452741 ~= 471 = 0x01D7;
rgb = 0xFF01D700;
Which is a less intense green with a hint of red.
You can fix that by dividing the coefficients by 1.8452741, but you want to make sure that:
(int)(255.0f * (sum of coefficients)) = 255
If not, you need to add a check that clamps each channel to 255 so it doesn't wrap around. E.g.:
if (red > 255.0f)
    red = 255.0f;
Regarding efficiency/optimization:
It could be that the difference in speed is partly explained by this needless casting and calling of Math.round, but a more likely candidate is the way you are accessing the image. I'm not familiar enough with BufferedImage and Raster to advise you on the most efficient way to access the underlying image buffer.
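Putting the answer's points together, a single-channel convolution with a normalized kernel, float accumulation, and one final clamp might look like this in Python (a sketch with my own names, not BufferedImage code):

```python
def convolve_channel(img, x, y, kernel):
    """Convolve one channel of `img` (a 2D list of 0-255 values) at (x, y)
    with a 3x3 `kernel` whose coefficients sum to 1.0."""
    acc = 0.0                                 # accumulate in float, not int
    for i in range(3):
        for j in range(3):
            acc += img[y + j][x + i] * kernel[i * 3 + j]
    return min(255, max(0, round(acc)))       # convert back and clamp once

# normalize the question's kernel so its coefficients sum to 1.0
raw = [0.1710991401561097, 0.2196956447338621, 0.1710991401561097,
       0.2196956447338621, 0.28209479177387814, 0.2196956447338621,
       0.1710991401561097, 0.2196956447338621, 0.1710991401561097]
kernel = [k / sum(raw) for k in raw]
```

Because the kernel is normalized, a uniform region stays at its original intensity instead of being pushed past 255 and bleeding into the next channel.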

Using logical bitshift for RGB values

I'm a bit naive when it comes to bitwise logic and I have what is probably a simple question... basically, if I have this (it's ActionScript, but it applies in many languages):
var color:uint = myObject.color;
var red:uint = color >>> 16;
var green:uint = color >>> 8 & 0xFF;
var blue:uint = color & 0xFF;
I was wondering what exactly the `& 0xFF` is doing to green and blue. I understand what an AND operation does, but why is it needed (or a good idea) here?
The source for this code was here: http://alexgblog.com/?p=680
Appreciate the tips.
Alex
In RGB you have 8 bits for Red, 8 bits for Green and 8 bits for Blue. You are storing the 3 bytes in an int, which has 4 bytes, in this way:
Bits 0-7 (least significant) for Blue.
Bits 8-15 for Green.
Bits 16-23 for Red.
To extract them into separate values, you right-shift by the correct number of bits so that the byte corresponding to the colour you want ends up in the least significant byte of the int, and then set the rest of the int to 0 so that only that byte's value is left. The last part is done with the AND operation and the mask 0xFF: the AND keeps the byte of the int where the mask is set and zeroes the other bytes.
This is what happens:
var color:uint = myObject.color;
You have the color variable like this: 0x00RRGGBB
var red:uint = color >>> 16;
Right-shifting color by 16 bits results in 0x000000RR, which is the red value.
But for:
var green:uint = color >>> 8 & 0xFF;
After right-shifting color by 8 bits you get 0x0000RRGG, but here we only want the GG bits, so we apply the AND operation with the mask 0xFF (or, to be clearer, 0x000000FF). As you should know, AND keeps the old bit values where the mask is 1 and gives zeros where the mask is 0, so 0x0000RRGG & 0x000000FF = 0x000000GG, which is the value for green. The same applies to extracting the blue value.
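The same extraction, written out in Python with a hypothetical colour value so each step is visible:

```python
color = 0x11AA33              # 0x00RRGGBB: red = 0x11, green = 0xAA, blue = 0x33
red = color >> 16             # 0x000011 -- already in the low byte, no mask needed
green = (color >> 8) & 0xFF   # shift gives 0x0011AA; the mask keeps only 0xAA
blue = color & 0xFF           # the mask keeps only the low byte, 0x33
```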

Need help understanding Alpha Channels

I have the RGB tuple of a pixel we'll call P.
(255, 0, 0) is the color of P with the alpha channel at 1.0.
With the alpha channel at 0.8, P's color becomes (255, 51, 51).
How can I get the color of the pixel that is influencing P's color?
Let's start from the beginning. A pixel with alpha only makes sense when it is blended with something else. If you have an upper layer U with alpha and a lower layer L that is totally opaque, the equation is:
P = (alpha * U) + ((1.0 - alpha) * L)
Rearranging the formula, you obtain:
L = (P - (alpha * U)) / (1.0 - alpha)
Obviously the equation doesn't make sense when the alpha is 1.0, as you'd be dividing by zero.
Plugging your numbers in reveals that R=255, G=255, and B=255 for the pixel L.
It is almost universal that the lowest layer in an image will be all white (255,255,255) by convention.
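Plugging the question's numbers into the rearranged formula, as a quick Python check (the helper name is mine):

```python
def lower_layer(P, U, alpha):
    """Recover the opaque lower layer L from the blended pixel P,
    the upper layer U and its alpha, per L = (P - alpha*U) / (1 - alpha)."""
    return tuple((p - alpha * u) / (1.0 - alpha) for p, u in zip(P, U))

# P = (255, 51, 51) observed, U = (255, 0, 0) at alpha = 0.8
L = lower_layer((255, 51, 51), (255, 0, 0), 0.8)
```

Each channel of L comes out as 255, i.e. the white background the answer describes.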
Just looking at the numbers you provided:
(1.0 - 0.8) * 255 = 51
Where:
1.0 is the maximum alpha intensity
0.8 is the currently set alpha intensity
255 is the maximum intensity of each of the RGB channels (the color of the background)
This fits the B and G channels of your example.
So, in the general case, it seems to be a simple weighted average between the channel value (either of RGB) and the background color (in your case, white -- 255). Alpha is being used as the weight.
Here's some Python code:
MIN_ALPHA=0.0
MAX_ALPHA=1.0
MIN_CH=0
MAX_CH=255
BG_VAL=255
def apply_alpha(old, alpha, bg=255):
assert alpha >= MIN_ALPHA
assert alpha <= MAX_ALPHA
assert old >= MIN_CH
assert old <= MAX_CH
new = old*alpha + (MAX_ALPHA - alpha)*bg
return new
if __name__ == '__main__':
import sys
old, alpha = map(float, sys.argv[1:])
print apply_alpha(old, alpha)
And some output:
misha@misha-K42Jr:~/Desktop/stackoverflow$ python alpha.py 255 0.8
255.0
misha@misha-K42Jr:~/Desktop/stackoverflow$ python alpha.py 0 0.8
51.0
Try this for other examples (in particular, non-white backgrounds) -- it's probably that simple. If not, edit your question with new examples and I'll have another look.

Algorithm for Hue/Saturation Adjustment Layer from Photoshop

Does anyone know how adjustment layers work in Photoshop? I need to generate a result image having a source image and HSL values from Hue/Saturation adjustment layer. Conversion to RGB and then multiplication with the source color does not work.
Or is it possible to replace the Hue/Saturation adjustment layer with normal layers with appropriately set blending modes (Multiply, Screen, Hue, Saturation, Color, Luminosity, ...)?
If so then how?
Thanks
I've reverse-engineered the computation for when the "Colorize" checkbox is checked. All of the code below is pseudo-code.
The inputs are:
hueRGB, which is an RGB color for HSV(photoshop_hue, 100, 100).ToRGB()
saturation, which is photoshop_saturation / 100.0 (i.e. 0..1)
lightness, which is photoshop_lightness / 100.0 (i.e. -1..1)
value, which is the pixel.ToHSV().Value, scaled into 0..1 range.
The method to colorize a single pixel:
color = blend2(rgb(128, 128, 128), hueRGB, saturation);
if (lightness <= -1)
    return black;
else if (lightness >= 1)
    return white;
else if (lightness >= 0)
    return blend3(black, color, white, 2 * (1 - lightness) * (value - 1) + 1)
else
    return blend3(black, color, white, 2 * (1 + lightness) * (value) - 1)
Where blend2 and blend3 are:
blend2(left, right, pos):
    return rgb(left.R * (1-pos) + right.R * pos, same for green, same for blue)

blend3(left, main, right, pos):
    if (pos < 0)
        return blend2(left, main, pos + 1)
    else if (pos > 0)
        return blend2(main, right, pos)
    else
        return main
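A direct Python transcription of the two blend helpers, with colours as (r, g, b) tuples:

```python
def blend2(left, right, pos):
    """Linear interpolation between two colours; pos in 0..1."""
    return tuple(l * (1 - pos) + r * pos for l, r in zip(left, right))

def blend3(left, main, right, pos):
    """Three-way blend; pos in -1..1. Negative pos blends towards `left`,
    positive towards `right`, and 0 returns `main` unchanged."""
    if pos < 0:
        return blend2(left, main, pos + 1)
    elif pos > 0:
        return blend2(main, right, pos)
    return main
```

For example, blend3 with pos = -0.5 is the midpoint between `left` and `main`, since the -1..0 range is remapped to blend2's 0..1.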
I have figured out how Lightness works.
The input parameter brightness b is in [0, 2]; the output is c (the colour channel).
if(b<1) c = b * c;
else c = c + (b-1) * (1-c);
Some tests:
b = 0 >>> c = 0 // black
b = 1 >>> c = c // same color
b = 2 >>> c = 1 // white
However, if you choose some interval (e.g. Reds instead of Master), Lightness behaves completely differently, more like Saturation.
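The Lightness rule above, sketched in Python with the channel value normalized to 0..1:

```python
def lightness(c, b):
    """Apply the Master-lightness rule: b in [0, 2], channel c in [0, 1]."""
    if b < 1:
        return b * c                  # fade towards black
    return c + (b - 1) * (1 - c)      # fade towards white

assert lightness(0.25, 0) == 0        # b = 0 -> black
assert lightness(0.25, 1) == 0.25     # b = 1 -> same colour
assert lightness(0.25, 2) == 1        # b = 2 -> white
```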
Photoshop, dunno. But the theory is usually: the RGB image is converted to HSL/HSV by the particular layer's internal methods; each pixel's HSL is then modified according to the specified parameters, and the result is converted back to RGB (for display).
PaintShopPro7 used to split up the H space (assuming a range of 0..360) into discrete increments of 30° (IIRC), so if you bumped only the "yellows", only pixels whose H component was between 45 and 75 would be considered for manipulation.
reds 345..15, oranges 15..45, yellows 45..75, yellowgreen 75..105, greens 105..135, etc.
if (h >= 45 && h < 75)
    s += s * yellow_percent;
There are alternative possibilities, such as applying a falloff filter, as in:
/* For h=60, let m=1... and linearly fall off to h=75, m=0. */
m = 1 - abs(h - 60) / 15;
if (m < 0)
    m = 0;
s += s * yellow_percent * m;
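A Python version of the falloff filter (the function name is mine; yellow_percent is the assumed boost amount):

```python
def adjust_saturation(h, s, yellow_percent):
    """Boost saturation for yellows, with a linear falloff centred on h = 60."""
    m = 1 - abs(h - 60) / 15.0    # m = 1 at h = 60, falling to 0 at h = 45 and 75
    if m < 0:
        m = 0                     # no effect outside the 45..75 band
    return s + s * yellow_percent * m
```

At h = 60 the full boost applies; at h = 67.5 only half of it; at h = 90 the hue is outside the band and the saturation is unchanged.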
Hello, I wrote a colorize shader; my equation is as follows.
inputRGB is the source image, which should be monochrome:
(r + g + b) * 0.333
colorRGB is your destination color.
finalRGB is the result.
pseudo code:
finalRGB = inputRGB * (colorRGB + inputRGB * 0.5);
I think it's fast and efficient
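A per-channel Python sketch of that equation, assuming channels are normalized to 0..1 and clamping the result (those are my assumptions, not stated in the answer):

```python
def colorize(input_rgb, color_rgb):
    """final = input * (color + input * 0.5), applied per channel,
    where `input` is the monochrome value (r+g+b) * 0.333."""
    mono = sum(input_rgb) * 0.333                      # monochrome source value
    return tuple(min(1.0, mono * (c + mono * 0.5)) for c in color_rgb)
```

Colorizing a mid-grey pixel with pure red, for instance, yields a result whose red channel dominates while green and blue stay equal, as expected.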
I translated @Roman Starkov's solution to Java in case anyone needs it, but for some reason it didn't work so well. Then I read a little and found that the solution is very simple; there are 2 things that have to be done:
When changing the hue or saturation, replace only the hue and saturation of the original image; the lightness stays as it was in the original image. This blend method is called the luminosity blend mode (section 10.2.4):
https://www.w3.org/TR/compositing-1/#backdrop
When changing the lightness, the Photoshop slider indicates what percentage we need to add to or subtract from the original lightness in order to get to white or black in HSL.
For example:
If the original pixel has 0.7 lightness and the lightness slider = 20,
we need 0.3 more lightness to get to 1,
so we add to the original pixel's lightness: 0.7 + 0.2*0.3;
this will be the new blended lightness value for the new pixel.
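That rule, as a small Python helper (my own formulation; the negative-slider case is assumed symmetric towards black):

```python
def blend_lightness(original_l, slider):
    """original_l in 0..1; slider in -100..100 as in the Photoshop UI."""
    amount = slider / 100.0
    if amount >= 0:
        # add the given fraction of the remaining distance to white
        return original_l + amount * (1.0 - original_l)
    # subtract the given fraction of the distance to black
    return original_l + amount * original_l
```

With the example above, blend_lightness(0.7, 20) gives 0.7 + 0.2*0.3 = 0.76.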
@Roman Starkov's solution, Java implementation:
//newHue, which is photoshop_hue (i.e. 0..360)
//newSaturation, which is photoshop_saturation / 100.0 (i.e. 0..1)
//newLightness, which is photoshop_lightness / 100.0 (i.e. -1..1)
//returns rgb int array of new color
private static int[] colorizeSinglePixel(int originalPixel, int newHue, float newSaturation, float newLightness)
{
    float[] originalPixelHSV = new float[3];
    Color.colorToHSV(originalPixel, originalPixelHSV);
    float originalPixelLightness = originalPixelHSV[2];

    float[] hueRGB_HSV = {newHue, 1.0f, 1.0f}; // note: Android expects S and V in 0..1
    int[] hueRGB = {Color.red(Color.HSVToColor(hueRGB_HSV)), Color.green(Color.HSVToColor(hueRGB_HSV)), Color.blue(Color.HSVToColor(hueRGB_HSV))};
    int[] color = blend2(new int[]{128, 128, 128}, hueRGB, newSaturation);
    int[] blackColor = new int[]{Color.red(Color.BLACK), Color.green(Color.BLACK), Color.blue(Color.BLACK)};
    int[] whiteColor = new int[]{Color.red(Color.WHITE), Color.green(Color.WHITE), Color.blue(Color.WHITE)};

    if(newLightness <= -1)
    {
        return blackColor;
    }
    else if(newLightness >= 1)
    {
        return whiteColor;
    }
    else if(newLightness >= 0)
    {
        return blend3(blackColor, color, whiteColor, 2*(1 - newLightness)*(originalPixelLightness - 1) + 1);
    }
    else
    {
        return blend3(blackColor, color, whiteColor, 2*(1 + newLightness)*(originalPixelLightness) - 1);
    }
}

private static int[] blend2(int[] left, int[] right, float pos)
{
    return new int[]{(int) (left[0]*(1-pos) + right[0]*pos), (int) (left[1]*(1-pos) + right[1]*pos), (int) (left[2]*(1-pos) + right[2]*pos)};
}

// pos must stay a float here; casting it to int (as in my first attempt) discards the blend fraction
private static int[] blend3(int[] left, int[] main, int[] right, float pos)
{
    if(pos < 0)
    {
        return blend2(left, main, pos + 1);
    }
    else if(pos > 0)
    {
        return blend2(main, right, pos);
    }
    else
    {
        return main;
    }
}
When the “Colorize” checkbox is checked, the lightness of the underlying layer is combined with the values of the Hue and Saturation sliders and converted from HSL to RGB according to the equations at https://en.wikipedia.org/wiki/HSL_and_HSV#From_HSL . (The Lightness slider just remaps the lightness to a subset of the scale as you can see from watching the histogram; the effect is pretty awful and I don’t see why anyone would ever use it.)
