Hue and saturation ranges in OpenCV seem conflicting

As far as I know, the hue and saturation ranges in OpenCV are 0 to 180 for hue and 0 to 255 for saturation.
But in the histogram comparison example in the OpenCV docs, they have taken the following:
// hue varies from 0 to 256, saturation from 0 to 180
float h_ranges[] = { 0, 256 };
float s_ranges[] = { 0, 180 };
Shouldn't it be the reversed case?

Yes, you're right, it's a bug. It should be:
// hue varies from 0 to 180, saturation from 0 to 256
float h_ranges[] = { 0, 180 };
float s_ranges[] = { 0, 256 };
(the sample in cpp/tutorials does the right thing actually)
[edit] will be fixed soon.
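For reference, a minimal Python/OpenCV sketch (not part of the original answer) that builds an H-S histogram with the corrected ranges; the file name is a placeholder:
import cv2

bgr = cv2.imread("image.jpg")
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

# For 8-bit images OpenCV stores H in [0, 180) and S in [0, 256).
h_bins, s_bins = 50, 60
hist = cv2.calcHist([hsv], [0, 1], None, [h_bins, s_bins], [0, 180, 0, 256])
cv2.normalize(hist, hist, 0, 1, cv2.NORM_MINMAX)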

Related

JavaCV findContours outlining the image instead of finding the contour

I am trying to find whether there is any rectangle/square present inside my area of interest. Here is what I have achieved so far.
Below is the region of interest which I snipped out of the original image using JavaCV.
Mat areaOfInterest = OpenCVUtils.getRegionOfInterest("image.jpg", 295, 200, 23, 25);

public static Mat getRegionOfInterest(String filePath, int x, int y, int width, int height) {
    Mat roi = null;
    try {
        Mat image = Imgcodecs.imread(filePath);
        Rect region_of_interest = new Rect(x, y, width, height);
        roi = image.submat(region_of_interest);
    } catch (Exception ex) {
    }
    return roi;
}
Now I'm trying to find whether there is any rectangle present in the area of interest. I have used following lines of code to detect that as well.
Mat gray = new Mat();
Mat binary = new Mat();
Mat hierarchy = new Mat();
ArrayList<MatOfPoint> contours = new ArrayList<>();

cvtColor(image, gray, COLOR_BGR2GRAY);
Core.bitwise_not(gray, binary);
findContours(binary, contours, hierarchy, RETR_EXTERNAL, CHAIN_APPROX_NONE);

if (contours.size() > 0) {
    for (MatOfPoint contour : contours) {
        Rect rect = boundingRect(contour);
        /// x = 0, y = 1, w = 2, h = 3
        Point p1 = new Point(rect.x, rect.y);
        Point p2 = new Point(rect.width + rect.x, rect.height + rect.y);
        rectangle(image, p1, p2, new Scalar(0, 0, 255));
        Imgcodecs.imwrite("F:\\rect.png", image);
    }
}
But instead of finding the square inside the image, it outlines parts of the image as follows.
It would be great if someone pushes me in the right direction.
OpenCV's findContours() treats the input image as binary, where everything that is 0 is black, and any pixel >0 is white. Since you're reading a jpg image, the compression makes it so that most white pixels aren't exactly white, and most black pixels aren't exactly black. Thus, if you have an input image like:
3 4 252 250 3 1
3 3 247 250 3 2
3 2 250 250 2 2
4 4 252 250 3 1
3 3 247 250 3 2
3 2 250 250 2 2
then findContours() will just outline the whole thing, since to it it's equivalent to all being 255 (they're all > 0).
All you need to do is binarize the image with something like threshold() or inRange(), so that your image actually comes out to
0 0 255 255 0 0
0 0 255 255 0 0
0 0 255 255 0 0
0 0 255 255 0 0
0 0 255 255 0 0
0 0 255 255 0 0
Then you'd correctly get the outline of the 255 block in the center.
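As an illustration, here is a minimal Python/OpenCV sketch of that fix (the question uses JavaCV, but the calls map one-to-one; the file name and threshold value are placeholders):
import cv2

img = cv2.imread("roi.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
# Binarize first: anything below 128 becomes 0, everything else becomes 255.
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
# OpenCV 4.x return signature; 3.x returns three values.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(img, (x, y), (x + w, y + h), 255, 1)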

Avoid light color in random UIColor

I'm generating a random UIColor. I want to avoid light colors like yellow, light green, etc. Here's my code:
+ (UIColor *)generateRandom {
CGFloat hue = ( arc4random() % 256 / 256.0 ); // 0.0 to 1.0
CGFloat saturation = ( arc4random() % 128 / 256.0 ) + 0.5; // 0.5 to 1.0, away from white
CGFloat brightness = ( arc4random() % 128 / 256.0 ) + 0.5; // 0.5 to 1.0, away from black
return [UIColor colorWithHue:hue saturation:saturation brightness:brightness alpha:1];
}
I'm using this for the UITableViewCell background color. The cell's textLabel color is white, so if the background color is light green or some other light color, the text is not clearly visible.
How do I fix this? Can I avoid generating light colors, or detect whether a generated color is light?
If I can detect that a color is light, I can change the text color to something else.
It sounds like you want to avoid colors close to white. Since you're already in HSV space, this should be a simple matter of setting a distance from white to avoid. A simple implementation would limit the saturation and brightness to be no closer than some threshold. Something like:
if (saturation < kSatThreshold)
{
    saturation = kSatThreshold;
}
if (brightness > kBrightnessThreshold)
{
    brightness = kBrightnessThreshold;
}
Something more sophisticated would be to check the distance from white and if it's too close, push it back out:
CGFloat deltaH = hue - kWhiteHue;
CGFloat deltaS = saturation - kWhiteSaturation;
CGFloat deltaB = brightness - kWhiteBrightness;
CGFloat distance = sqrt(deltaH * deltaH + deltaS * deltaS + deltaB * deltaB);
if (distance < kDistanceThreshold)
{
    // normalize distance vector
    deltaH /= distance;
    deltaS /= distance;
    deltaB /= distance;
    hue = kWhiteHue + deltaH * kDistanceThreshold;
    saturation = kWhiteSaturation + deltaS * kDistanceThreshold;
    brightness = kWhiteBrightness + deltaB * kDistanceThreshold;
}
Light colors are those with high brightness (or lightness, luminosity...).
Generate colors with random hue and saturation, but limit the randomness of brightness to low numbers, like 0 to 0.5, or keep the brightness constant. If you are showing the colors side by side, the aesthetic impact is usually better if you change only 2 of the 3 components in HSB (HSV, HSL).
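A minimal sketch of that idea (written in Python rather than Objective-C purely to illustrate the ranges; the exact bounds are arbitrary assumptions):
import colorsys
import random

def random_dark_rgb():
    h = random.random()                  # full hue range
    s = random.uniform(0.5, 1.0)         # keep some saturation, away from white
    v = random.uniform(0.2, 0.5)         # cap brightness so the color stays dark
    return colorsys.hsv_to_rgb(h, s, v)  # r, g, b in 0..1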

Filter for red hue - emgucv/opencv

How do I filter an image for red hue? I understand that red lies around zero between 330° and 30° (represented by 165 to 15 in OpenCV?). How can I use that range with the InRange method as there is an overflow at 360° (180 in OpenCV)?
I'm detecting hue colour using the following code:
Mat img_hsv, dst ;
cap >> image;
cvtColor(image, img_hsv, CV_RGB2HSV);
inRange(img_hsv, Scalar(110, 130, 100), Scalar(140, 255, 255), dst );
where dst is a Mat of the same size as img_hsv and of type CV_8U.
Your scalars determine the filtered colour. In my case it's:
HUE from 110 to 140
SAT from 130 to 255
VAL from 100 to 255
more info here:
OpenCV 2.4 InRange()
I'm not sure about using a hue range that wraps around the 180 boundary, but I think you can calculate the two ranges separately and then add the resulting Mats.
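A minimal Python/OpenCV sketch of that idea (the saturation/value bounds are assumptions, and the file name is a placeholder):
import cv2

bgr = cv2.imread("frame.png")
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

# Red wraps around H = 0, so take the two ends of the hue circle separately.
lower_red = cv2.inRange(hsv, (0, 130, 100), (15, 255, 255))      # roughly 0..30 degrees
upper_red = cv2.inRange(hsv, (165, 130, 100), (180, 255, 255))   # roughly 330..360 degrees
red_mask = cv2.bitwise_or(lower_red, upper_red)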

FFT Convolution - Really low PSNR

I'm convolving an image (512*512) with an FFT filter (kernel size = 10), and it looks good.
But when I compare it with an image convolved the normal (spatial) way, the result is horrible.
The PSNR is about 35.
67,187 of 262,144 pixel values differ by 1 or more (peak at ~8), with a maximum pixel value of 255.
My question is: is this normal when convolving in frequency space, or might there be a problem with my convolution/transform functions? The strange thing is that I should get better results when using double as the data type, but the result stays COMPLETELY the same.
When I transform an image into frequency space, DON'T convolve it, and then transform it back, it's fine, and the PSNR is about 140 when using float.
Also, since the pixel differences are only 1-10, I think I can rule out scaling errors.
EDIT: More Details for bored interested people
I use the open source kissFFT library. With real 2dimensional input (kiss_fftndr.h)
My Image Datatype is PixelMatrix. Simply a matrix with alpha, red, green and blue values from 0.0 to 1.0 float
My kernel is also a PixelMatrix.
Here some snippets from the Convolution function
Used datatypes:
#define kiss_fft_scalar float

typedef struct {
    kiss_fft_scalar r;
    kiss_fft_scalar i;
} kiss_fft_cpx;
Configuration of the FFT:
//parameters to kiss_fftndr_alloc:
//1st param = array with the size of the 2 dimensions (in my case dim={width, height})
//2nd param = count of the dimensions (in my case 2)
//3rd param = 0 or 1 (forward or inverse FFT)
//4th and 5th params are not relevant
kiss_fftndr_cfg stf = kiss_fftndr_alloc(dim, 2, 0, 0, 0);
kiss_fftndr_cfg sti = kiss_fftndr_alloc(dim, 2, 1, 0, 0);
Padding and transforming the kernel:
I make a new array:
kiss_fft_scalar kernel[width*height];
I fill it with 0 in a loop.
Then I fill the middle of this array with the kernel I want to use.
So if I were to use a 2*2 kernel with values 1/4, 1/4, 1/4 and 1/4, it would look like
0   0   0   0
0  1/4 1/4  0
0  1/4 1/4  0
0   0   0   0
The zeros are padded until they reach the size of the image.
Then I swap the quadrants of the image diagonally. It looks like:
1/4 0 0 1/4
0 0 0 0
0 0 0 0
1/4 0 0 1/4
Now I transform it: kiss_fftndr(stf, floatKernel, outkernel);
outkernel is declared as
kiss_fft_cpx* outkernel = new kiss_fft_cpx[width*height];
Getting the colors into arrays:
kiss_fft_scalar *red = new kiss_fft_scalar[width*height];
kiss_fft_scalar *green = new kiss_fft_scalar[width*height];
kiss_fft_scalar *blue = new kiss_fft_scalar[width*height];

for(int i=0; i<height; i++) {
    for(int j=0; j<width; j++) {
        red[i*width+j] = input.get(j,i).getRed();   // input is the input image pixel matrix
        green[i*width+j] = input.get(j,i).getGreen();
        blue[i*width+j] = input.get(j,i).getBlue();
    }
}
Then I transform the arrays:
kiss_fftndr(stf, red, outred);
kiss_fftndr(stf, green, outgreen);
kiss_fftndr(stf, blue, outblue); //the out-arrays are type kiss_fft_cpx*
The convolution:
What we have now:
3 transformed color arrays of type kiss_fft_cpx*
1 transformed kernel array of type kiss_fft_cpx*
All of them are complex arrays.
Now comes the convolution:
for(int m=0; m<til; m++) {
    for(int n=0; n<til; n++) {
        kiss_fft_scalar real = outcolor[m*til+n].r;      // I do that for all 3 arrays in my code,
        kiss_fft_scalar imag = outcolor[m*til+n].i;      // so I have realred, realgreen, realblue
        kiss_fft_scalar realMask = outkernel[m*til+n].r; // and imagred, imaggreen, etc.
        kiss_fft_scalar imagMask = outkernel[m*til+n].i;

        outcolor[m*til+n].r = real * realMask - imag * imagMask; // same here: in my code I
        outcolor[m*til+n].i = real * imagMask + imag * realMask; // do it for all 3 colors
    }
}
Now I transform them back:
kiss_fftndri(sti, outred, red);
kiss_fftndri(sti, outgreen, green);
kiss_fftndri(sti, outblue, blue);
and I create a new PixelMatrix with the values from the color arrays:
PixelMatrix output;
for(int i=0; i<height; i++) {
    for(int j=0; j<width; j++) {
        Pixel p = new Pixel();
        // divide by (width*height) because of the scaling happening in the inverse FFT
        p.setRed( red[i*width+j] / (width*height) );
        p.setGreen( green[i*width+j] / (width*height) );
        p.setBlue( blue[i*width+j] / (width*height) );
        output.set(j, i, p);
    }
}
Notes:
I already make sure in advance that the image size is a power of two (256*256, 512*512, etc.).
Examples (kernel size 10): input image, FFT-convolved output, and output from normal convolution (the images are not reproduced here).
my console says :
142519 out of 262144 Pixels have a difference of 1 or more (maxRGB = 255)
PSNR: 32.006027221679688
MSE: 44.116752624511719
though to my eyes they look the same °.°
Maybe someone is bored enough to go through the code. It's not urgent, but it's the kind of problem where I just want to know what the hell I did wrong ^^
Last, but not least, my PSNR function, though I don't really think that's the problem :D
void calculateThePSNR(const PixelMatrix first, const PixelMatrix second, float* avgpsnr, float* avgmse) {
    int height = first.getHeight();
    int width = first.getWidth();
    BMP firstOutput;
    BMP secondOutput;
    firstOutput.SetSize(width, height);
    secondOutput.SetSize(width, height);

    double rsum=0.0, gsum=0.0, bsum=0.0;
    int count = 0;
    int total = 0;
    for(int i=0; i<height; i++) {
        for(int j=0; j<width; j++) {
            Pixel pixOne = first.get(j,i);
            Pixel pixTwo = second.get(j,i);

            double redOne = pixOne.getRed()*255;
            double greenOne = pixOne.getGreen()*255;
            double blueOne = pixOne.getBlue()*255;

            double redTwo = pixTwo.getRed()*255;
            double greenTwo = pixTwo.getGreen()*255;
            double blueTwo = pixTwo.getBlue()*255;

            firstOutput(j,i)->Red = redOne;
            firstOutput(j,i)->Green = greenOne;
            firstOutput(j,i)->Blue = blueOne;

            secondOutput(j,i)->Red = redTwo;
            secondOutput(j,i)->Green = greenTwo;
            secondOutput(j,i)->Blue = blueTwo;

            if((redOne-redTwo) > 1.0 || (redOne-redTwo) < -1.0) {
                count++;
            }
            total++;

            rsum += (redOne - redTwo) * (redOne - redTwo);
            gsum += (greenOne - greenTwo) * (greenOne - greenTwo);
            bsum += (blueOne - blueTwo) * (blueOne - blueTwo);
        }
    }
    fprintf(stderr, "%d out of %d Pixels have a difference of 1 or more (maxRGB = 255)", count, total);

    double rmse = rsum/(height*width);
    double gmse = gsum/(height*width);
    double bmse = bsum/(height*width);

    double rpsnr = 20 * log10(255/sqrt(rmse));
    double gpsnr = 20 * log10(255/sqrt(gmse));
    double bpsnr = 20 * log10(255/sqrt(bmse));

    firstOutput.WriteToFile("test.bmp");
    secondOutput.WriteToFile("test2.bmp");
    system("display test.bmp");
    system("display test2.bmp");

    *avgmse = (rmse + gmse + bmse)/3;
    *avgpsnr = (rpsnr + gpsnr + bpsnr)/3;
}
Phonon had the right idea. Your images are shifted. If you shift your image by (1,1), then the MSE will be approximately zero (provided that you mask or crop the images accordingly). I confirmed this using the code (Python + OpenCV) below.
import cv
import sys
import math

def main():
    fname1, fname2 = sys.argv[1:]
    im1 = cv.LoadImage(fname1)
    im2 = cv.LoadImage(fname2)
    tmp = cv.CreateImage(cv.GetSize(im1), cv.IPL_DEPTH_8U, im1.nChannels)
    cv.AbsDiff(im1, im2, tmp)
    cv.Mul(tmp, tmp, tmp)
    mse = cv.Avg(tmp)
    print 'MSE:', mse
    psnr = [ 10*math.log(255**2/m, 10) for m in mse[:-1] ]
    print 'PSNR:', psnr

if __name__ == '__main__':
    main()
Output:
MSE: (0.027584912741602553, 0.026742391458366047, 0.028147870144492403, 0.0)
PSNR: [63.724087463606452, 63.858801190963192, 63.636348220531396]
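To check that claim yourself, here is a small sketch of my own (numpy/cv2 rather than the legacy cv module) that undoes a (1,1) circular shift before comparing; the file names are placeholders, and the sign of the shift may need flipping depending on which image is offset:
import cv2
import numpy as np

fft_result = cv2.imread("fft_output.png").astype(np.float64)
spatial_result = cv2.imread("spatial_output.png").astype(np.float64)

shifted = np.roll(fft_result, shift=(-1, -1), axis=(0, 1))  # undo the (1,1) offset
mse = np.mean((shifted - spatial_result) ** 2, axis=(0, 1))
psnr = 10 * np.log10(255.0 ** 2 / mse)
print("MSE:", mse, "PSNR:", psnr)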
My advice is to try the following code:
A = double(inputS(1:10:length(inputS))); % segmentation
A(:) = -A(:);
% process the image or signal with the fast Fourier transform and inverse FFT
fresult = fft(inputS);
fresult(1:round(length(inputS)*2/fs)) = 0;
fresult(end-round(length(fresult)*2/fs):end) = 0;
Y = real(ifft(fresult));
This code helps you obtain an output of the same size and is good for removing the DC component; after that you can do the convolution.
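For reference, here is a minimal numpy sketch of my own (not from either answer) of the pipeline the question describes: pad the kernel to the image size, centre it with ifftshift so the result is not translated, multiply the spectra, and transform back:
import numpy as np

def fft_convolve(channel, kernel):
    # Circular convolution of one image channel with a small kernel via the FFT.
    h, w = channel.shape
    kh, kw = kernel.shape
    padded = np.zeros((h, w))
    # place the kernel in the centre of the padded array ...
    padded[(h - kh)//2:(h - kh)//2 + kh, (w - kw)//2:(w - kw)//2 + kw] = kernel
    # ... then move its centre to (0, 0) so the output is not shifted
    padded = np.fft.ifftshift(padded)
    # numpy's irfft2 already applies the 1/(h*w) scaling
    return np.fft.irfft2(np.fft.rfft2(channel) * np.fft.rfft2(padded), s=(h, w))

# usage sketch: 10*10 box blur on a random "image"
img = np.random.rand(512, 512)
kernel = np.full((10, 10), 1.0 / 100)
blurred = fft_convolve(img, kernel)
Note that with an even-sized kernel the centre falls between two pixels, which is exactly the kind of one-pixel offset discussed above.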

Algorithm for Hue/Saturation Adjustment Layer from Photoshop

Does anyone know how adjustment layers work in Photoshop? I need to generate a result image given a source image and the HSL values from a Hue/Saturation adjustment layer. Converting the HSL values to RGB and then multiplying with the source color does not work.
Or is it possible to replace a Hue/Saturation adjustment layer with normal layers with appropriately set blending modes (Multiply, Screen, Hue, Saturation, Color, Luminosity, ...)?
If so, how?
Thanks
I've reverse-engineered the computation for when the "Colorize" checkbox is checked. All of the code below is pseudo-code.
The inputs are:
hueRGB, which is an RGB color for HSV(photoshop_hue, 100, 100).ToRGB()
saturation, which is photoshop_saturation / 100.0 (i.e. 0..1)
lightness, which is photoshop_lightness / 100.0 (i.e. -1..1)
value, which is the pixel.ToHSV().Value, scaled into 0..1 range.
The method to colorize a single pixel:
color = blend2(rgb(128, 128, 128), hueRGB, saturation);

if (lightness <= -1)
    return black;
else if (lightness >= 1)
    return white;
else if (lightness >= 0)
    return blend3(black, color, white, 2 * (1 - lightness) * (value - 1) + 1)
else
    return blend3(black, color, white, 2 * (1 + lightness) * (value) - 1)
Where blend2 and blend3 are:
blend2(left, right, pos):
    return rgb(left.R * (1-pos) + right.R * pos, same for green, same for blue)

blend3(left, main, right, pos):
    if (pos < 0)
        return blend2(left, main, pos + 1)
    else if (pos > 0)
        return blend2(main, right, pos)
    else
        return main
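A direct transcription of the pseudo-code above into Python (my own rendering, assuming RGB channels in 0..255 and the inputs exactly as described):
def blend2(left, right, pos):
    # linear interpolation between two RGB triples, pos in 0..1
    return tuple(l * (1 - pos) + r * pos for l, r in zip(left, right))

def blend3(left, main, right, pos):
    # pos in -1..1: negative blends towards `left`, positive towards `right`
    if pos < 0:
        return blend2(left, main, pos + 1)
    if pos > 0:
        return blend2(main, right, pos)
    return main

def colorize(value, hue_rgb, saturation, lightness):
    # value: the pixel's HSV value in 0..1; hue_rgb: RGB of HSV(hue, 100, 100)
    black, white = (0, 0, 0), (255, 255, 255)
    color = blend2((128, 128, 128), hue_rgb, saturation)
    if lightness <= -1:
        return black
    if lightness >= 1:
        return white
    if lightness >= 0:
        return blend3(black, color, white, 2 * (1 - lightness) * (value - 1) + 1)
    return blend3(black, color, white, 2 * (1 + lightness) * value - 1)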
I have figured out how Lightness works.
The input parameter brightness b is in [0, 2], Output is c (color channel).
if(b<1) c = b * c;
else c = c + (b-1) * (1-c);
Some tests:
b = 0 >>> c = 0 // black
b = 1 >>> c = c // same color
b = 2 >>> c = 1 // white
However, if you choose some interval (e.g. Reds instead of Master), Lightness behaves completely differently, more like Saturation.
Photoshop, dunno. But the theory is usually: The RGB image is converted to HSL/HSV by the particular layer's internal methods; each pixel's HSL is then modified according to the specified parameters, and the so-obtained result is being provided back (for displaying) in RGB.
PaintShopPro7 used to split up the H space (assuming a range of 0..360) into discrete increments of 30° (IIRC), so if you bumped only the "yellows", only pixels whose H component was between 45 and 75 would be considered for manipulation.
reds 345..15, oranges 15..45, yellows 45..75, yellowgreen 75..105, greens 105..135, etc.
if (h >= 45 && h < 75)
    s += s * yellow_percent;
There are alternative possibilities, such as applying a falloff filter, as in:
/* For h=60, let m=1, and linearly fall off to m=0 at h=75 (and at h=45). */
m = 1 - abs(h - 60) / 15;
if (m < 0)
    m = 0;
s += s * yellow_percent * m;
Hello, I wrote a colorize shader and my equation is as follows:
inputRGB is the source image, which should be monochrome:
(r + g + b) * 0.333
colorRGB is your destination color.
finalRGB is the result.
Pseudo code:
finalRGB = inputRGB * (colorRGB + inputRGB * 0.5);
I think it's fast and efficient.
I translated @Roman Starkov's solution to Java in case anyone needs it, but for some reason it didn't work so well. Then I read a bit more and found that the solution is very simple; there are 2 things that have to be done:
1. When changing the hue or saturation, replace only the hue and saturation of the original image and keep the lightness of the original image. This blend method is called the luminosity blend mode (section 10.2.4):
https://www.w3.org/TR/compositing-1/#backdrop
2. When changing the lightness, the Photoshop slider indicates what percentage of the remaining distance to white (or black) we need to add to (or subtract from) the original lightness in HSL.
For example:
If the original pixel has lightness 0.7 and the lightness slider is at 20,
we need 0.3 more lightness to get to 1,
so we add to the original pixel's lightness: 0.7 + 0.2*0.3;
this will be the new blended lightness value for the new pixel.
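A small sketch of that lightness rule (my own, in Python, assuming lightness in 0..1 and the slider in -100..100):
def adjust_lightness(original_lightness, slider):
    # slider in -100..100; positive moves towards white, negative towards black
    amount = slider / 100.0
    if amount >= 0:
        return original_lightness + amount * (1.0 - original_lightness)
    return original_lightness + amount * original_lightness

# the example from the text: lightness 0.7, slider 20  ->  0.7 + 0.2*0.3 = 0.76
print(adjust_lightness(0.7, 20))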
@Roman Starkov's solution, Java implementation:
//newHue, which is photoshop_hue (i.e. 0..360)
//newSaturation, which is photoshop_saturation / 100.0 (i.e. 0..1)
//newLightness, which is photoshop_lightness / 100.0 (i.e. -1..1)
//returns rgb int array of new color
private static int[] colorizeSinglePixel(int originalPixel, int newHue, float newSaturation, float newLightness) {
    float[] originalPixelHSV = new float[3];
    Color.colorToHSV(originalPixel, originalPixelHSV);
    float originalPixelLightness = originalPixelHSV[2];

    // Android's Color.HSVToColor expects S and V in 0..1
    float[] hueRGB_HSV = {newHue, 1.0f, 1.0f};
    int hueColor = Color.HSVToColor(hueRGB_HSV);
    int[] hueRGB = {Color.red(hueColor), Color.green(hueColor), Color.blue(hueColor)};

    int[] color = blend2(new int[]{128, 128, 128}, hueRGB, newSaturation);
    int[] blackColor = {Color.red(Color.BLACK), Color.green(Color.BLACK), Color.blue(Color.BLACK)};
    int[] whiteColor = {Color.red(Color.WHITE), Color.green(Color.WHITE), Color.blue(Color.WHITE)};

    if (newLightness <= -1) {
        return blackColor;
    } else if (newLightness >= 1) {
        return whiteColor;
    } else if (newLightness >= 0) {
        // keep the blend position as a float; casting it to int would lose the fractional blend
        return blend3(blackColor, color, whiteColor, 2 * (1 - newLightness) * (originalPixelLightness - 1) + 1);
    } else {
        return blend3(blackColor, color, whiteColor, 2 * (1 + newLightness) * originalPixelLightness - 1);
    }
}

private static int[] blend2(int[] left, int[] right, float pos) {
    return new int[]{
        (int) (left[0] * (1 - pos) + right[0] * pos),
        (int) (left[1] * (1 - pos) + right[1] * pos),
        (int) (left[2] * (1 - pos) + right[2] * pos)
    };
}

private static int[] blend3(int[] left, int[] main, int[] right, float pos) {
    if (pos < 0) {
        return blend2(left, main, pos + 1);
    } else if (pos > 0) {
        return blend2(main, right, pos);
    } else {
        return main;
    }
}
When the “Colorize” checkbox is checked, the lightness of the underlying layer is combined with the values of the Hue and Saturation sliders and converted from HSL to RGB according to the equations at https://en.wikipedia.org/wiki/HSL_and_HSV#From_HSL . (The Lightness slider just remaps the lightness to a subset of the scale as you can see from watching the histogram; the effect is pretty awful and I don’t see why anyone would ever use it.)
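A small sketch of that reading of Colorize (my own, using Python's colorsys; the slider scales are assumptions):
import colorsys

def colorize_pixel(r, g, b, hue_slider, sat_slider):
    # r, g, b in 0..1; hue_slider in 0..360, sat_slider in 0..100
    _, lightness, _ = colorsys.rgb_to_hls(r, g, b)  # keep the pixel's own lightness
    return colorsys.hls_to_rgb(hue_slider / 360.0, lightness, sat_slider / 100.0)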
