CIAreaHistogram inputScale factor

I'm building an application that uses the CIAreaHistogram Core Image filter. I use an inputCount value (number of buckets) of 10 for testing, and an inputScale value of 1.
I get the CIImage for the histogram itself, which I then run through a custom kernel (see end of post) to set alpha values to 1 (since otherwise the alpha value from the histogram calculations is premultiplied) and then convert it to an NSBitmapImageRep.
I then scan through the image rep's buffer and print the RGB values (skipping the alpha values). However, when I do this, the sums of the R, G, and B values across the 10 buckets do not necessarily add up to 255.
For example, with a fully black image, I apply the histogram, then the custom kernel, and get the following output:
RGB: 255 255 255
RGB: 0 0 0
RGB: 0 0 0
RGB: 0 0 0
RGB: 0 0 0
RGB: 0 0 0
RGB: 0 0 0
RGB: 0 0 0
RGB: 0 0 0
RGB: 0 0 0
This is as I expect, since all pixels are black, so everything is in the first bucket. However, if I run the same algorithm with a color image, I get the following:
RGB: 98 76 81
RGB: 164 97 87
RGB: 136 161 69
RGB: 100 156 135
RGB: 80 85 185
RGB: 43 34 45
RGB: 31 19 8
RGB: 19 7 3
RGB: 12 5 2
RGB: 16 11 11
Add up the values for R, G, and B: they don't come to 255. This causes problems because I need to compare two of these histograms, and my algorithm expects the sums to be between 0 and 255. I could obviously scale these values, but I want to avoid that extra step for performance reasons.
I noticed something else interesting that might give a clue as to why this is happening. My custom kernel simply sets the alpha value to 1. I tried a second kernel (see end of post) that sets every pixel to pure red, so the green and blue values should clearly be zero. However, I get this result when checking the values from the bitmap rep:
RGB: 255 43 25
But I just set G and B to zero! This seems to be part of the problem, and it points to color management. Since I explicitly set the values in the kernel, there's only one block of code where this can be happening: the conversion from the filter's output CIImage to an NSBitmapImageRep:
NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithCIImage:kernelOutput];
unsigned char *buf = [bitmapRep bitmapData];
Once I set the pixels to RGB 255 0 0, then execute those lines, then read the buffer, the RGB values are all 255 43 25. I have further tried setting the color space of the original CGImageRef on which the entire workflow is based to kCGColorSpaceGenericRGB, thinking the color profile may be carrying through, but to no avail.
Can anyone tell me why a CIFilter kernel would behave this way, and how I could solve it?
As mentioned before, here are copies of the CIFilter kernel functions I use. First, the one that sets alpha to 1:
kernel vec4 adjustHistogram(sampler src)
{
    vec4 pix = sample(src, destCoord());
    pix.a = 1.0;
    return pix;
}
And next, the one that sets all pixels to RGB 255 0 0 but ends up as 255 43 25 after conversion to an NSBitmapImageRep:
kernel vec4 adjustHistogram(sampler src)
{
    vec4 pix = sample(src, destCoord());
    pix.r = 1.0; pix.g = 0.0; pix.b = 0.0;
    pix.a = 1.0;
    return pix;
}
Thanks in advance for your help.

You only need one line of code to generate and display a histogram when using a custom Core Image filter (or whenever you are creating a new CIImage object or are replacing an existing one):
return [CIFilter filterWithName:@"CIHistogramDisplayFilter" keysAndValues:kCIInputImageKey, self.inputImage, @"inputHeight", @100.0, @"inputHighLimit", @1.0, @"inputLowLimit", @0.0, nil].outputImage;

Related

How to find a single RGB and opacity value from RGB values of a view against black and white backgrounds

I want to build a keyboard extension which looks similar to the iPhone's native keyboard in both dark and light mode. However, I have a hard time finding the RGB and opacity values of keyboard keys that match the iOS keyboard against both black and white backgrounds. I have pictures of the keyboard against light and dark backgrounds below. How can I find the RGB and opacity values given these two images? With a color picker, I get that the color of keyboard keys against the light background is RGB(150, 150, 150) with opacity 1, and RGB(107, 107, 107) against the dark background with opacity 1. I need a single RGB and opacity value such that it is equivalent to RGB(150, 150, 150) on a light background and to RGB(107, 107, 107) on a dark background.
Assuming grayscale colours with simple alpha compositing, let...
Gc = gray level of composited result
Gb = gray level of background
Gk = gray level of keycap
Ak = alpha level of keycap
then
Gc = Gk + Gb * (1 - Ak)
(Here, Ak is assumed to be between 0 and 1.)
Now some known levels (as measured in your screenshots):
For light mode,
Gb = 106
Gc = 150
For dark mode,
Gb = 43
Gc = 106
From that, we can derive:
150 = Gk + 106 * (1 - Ak)
106 = Gk + 43 * (1 - Ak)
Subtracting the second equation from the first gives Ak:
150 - 106 = (106 - 43) * (1 - Ak)
hence Ak = 0.3016
As a level from 0 to 255, Ak ≈ 77.
And then you can derive Gk = 76.
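For a quick sanity check, here is the same arithmetic as a minimal Java sketch (variable names follow the notation above; the final unpremultiplied value is an extra step, obtained by dividing the premultiplied level by alpha):

double gbLight = 106, gcLight = 150; // light mode: background, composited keycap
double gbDark = 43, gcDark = 106;    // dark mode: background, composited keycap

// Subtracting the two compositing equations eliminates Gk:
// gcLight - gcDark = (gbLight - gbDark) * (1 - Ak)
double ak = 1.0 - (gcLight - gcDark) / (gbLight - gbDark); // ~0.3016
double gk = gcLight - gbLight * (1.0 - ak);                // ~76, premultiplied gray level
double gkStraight = gk / ak;                               // ~252, straight (unpremultiplied) gray
System.out.printf("Ak = %.4f, Gk = %.1f, straight Gk = %.1f%n", ak, gk, gkStraight);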

JavaCV findContours outlining the image instead of finding the contour

I am trying to find whether there is any rectangle/square present inside my area of interest. Here is what I have achieved so far.
Below is the region of interest which I snipped out of the original image using JavaCV.
Mat areaOfInterest = OpenCVUtils.getRegionOfInterest("image.jpg", 295, 200, 23, 25);

public static Mat getRegionOfInterest(String filePath, int x, int y, int width, int height) {
    Mat roi = null;
    try {
        Mat image = Imgcodecs.imread(filePath);
        Rect regionOfInterest = new Rect(x, y, width, height);
        roi = image.submat(regionOfInterest);
    } catch (Exception ex) {
        // exception intentionally ignored in the original post
    }
    return roi;
}
Now I'm trying to find whether there is any rectangle present in the area of interest. I have used the following lines of code to detect it:
Mat gray = new Mat();
Mat binary = new Mat();
Mat hierarchy = new Mat();
ArrayList<MatOfPoint> contours = new ArrayList<>();
cvtColor(image, gray, COLOR_BGR2GRAY);
Core.bitwise_not(gray, binary);
findContours(binary, contours, hierarchy, RETR_EXTERNAL, CHAIN_APPROX_NONE);
if (contours.size() > 0) {
    for (MatOfPoint contour : contours) {
        Rect rect = boundingRect(contour);
        // x = 0, y = 1, w = 2, h = 3
        Point p1 = new Point(rect.x, rect.y);
        Point p2 = new Point(rect.width + rect.x, rect.height + rect.y);
        rectangle(image, p1, p2, new Scalar(0, 0, 255));
        Imgcodecs.imwrite("F:\\rect.png", image);
    }
}
But instead of finding the square inside the image, it outlines parts of the image, as shown below.
It would be great if someone pushes me in the right direction.
OpenCV's findContours() treats the input image as binary, where everything that is 0 is black, and any pixel >0 is white. Since you're reading a jpg image, the compression makes it so that most white pixels aren't exactly white, and most black pixels aren't exactly black. Thus, if you have an input image like:
3 4 252 250 3 1
3 3 247 250 3 2
3 2 250 250 2 2
4 4 252 250 3 1
3 3 247 250 3 2
3 2 250 250 2 2
then findContours() will just outline the whole thing, since to it, everything is equivalent to 255 (all pixels are > 0).
All you need to do is binarize the image with something like threshold() or inRange(), so that your image actually comes out to
0 0 255 255 0 0
0 0 255 255 0 0
0 0 255 255 0 0
0 0 255 255 0 0
0 0 255 255 0 0
0 0 255 255 0 0
Then you'd correctly get the outline of the 255 block in the center.
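A minimal sketch of that fix with OpenCV's Java bindings (the threshold value of 128 is an assumption; tune it for your images, or use THRESH_OTSU to pick it automatically):

import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

Mat image = Imgcodecs.imread("image.jpg");
Mat gray = new Mat();
Mat binary = new Mat();
Imgproc.cvtColor(image, gray, Imgproc.COLOR_BGR2GRAY);

// Binarize first so findContours sees a true 0/255 image:
// pixels above 128 become 255, everything else becomes 0.
Imgproc.threshold(gray, binary, 128, 255, Imgproc.THRESH_BINARY);

List<MatOfPoint> contours = new ArrayList<>();
Mat hierarchy = new Mat();
Imgproc.findContours(binary, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

for (MatOfPoint contour : contours) {
    Rect rect = Imgproc.boundingRect(contour);
    Imgproc.rectangle(image, rect.tl(), rect.br(), new Scalar(0, 0, 255));
}
Imgcodecs.imwrite("contours.png", image);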

Java Equivalent OpenCV Code to this C++ Code

Can anybody tell me the correct Java code for this C++ code snippet:
output.at<uchar>(x, y) = target.at<uchar>(dx, dy);
I have tried this Java code, and it displaces pixels but does not show the image clearly:
output.put(x, y, target.get(dx, dy));
For one-channel images, e.g. grayscale (values 0 ~ 255):
Getting a pixel value
double pixelValue = image.get(i,j)[0];
Setting a pixel value
image.put(i,j,230);
For 3-channel images, e.g. RGB (3 values, 0 ~ 255):
Getting a pixel (the double[] array will have 3 values):
double[] pixelValue = image.get(i,j);
Setting a pixel with 3 RGB values
image.put(i,j,255,250,100); // yellow color
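Putting the two together, a Java equivalent of the C++ line in the question would be a sketch like this (variable names mirror the question; since get() returns one double per channel and put() accepts the same array, it works for both grayscale and 3-channel images):

// Copy all channels of the pixel at target(dx, dy) into output(x, y).
double[] pixel = target.get(dx, dy);
output.put(x, y, pixel);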

Hue and saturation ranges in OpenCV seem conflicting

As far as I know, the hue and saturation ranges are 0 to 180 and 0 to 255, respectively.
But in the histogram comparison example in the OpenCV docs, they have the following:
// hue varies from 0 to 256, saturation from 0 to 180
float h_ranges[] = { 0, 256 };
float s_ranges[] = { 0, 180 };
Shouldn't it be the reversed case?
Yes, you're right, it's a bug. It should be:
// hue varies from 0 to 180, saturation from 0 to 256
float h_ranges[] = { 0, 180 };
float s_ranges[] = { 0, 256 };
(the sample in cpp/tutorials does the right thing actually)
[edit] will be fixed soon.
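For reference, here is a minimal H-S histogram computation with the corrected ranges, sketched with OpenCV's Java bindings (the bin counts of 50 and 60 and the hsvImage variable are assumptions, not from the original snippet):

import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import java.util.Arrays;
import java.util.List;

// hsvImage is assumed to be a Mat already converted with Imgproc.COLOR_BGR2HSV.
List<Mat> images = Arrays.asList(hsvImage);
MatOfInt channels = new MatOfInt(0, 1);   // histogram over H and S
MatOfInt histSize = new MatOfInt(50, 60); // 50 hue bins, 60 saturation bins
// hue varies from 0 to 180, saturation from 0 to 256
MatOfFloat ranges = new MatOfFloat(0f, 180f, 0f, 256f);
Mat hist = new Mat();
Imgproc.calcHist(images, channels, new Mat(), hist, histSize, ranges);
Core.normalize(hist, hist, 0, 1, Core.NORM_MINMAX);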

Filter for red hue - emgucv/opencv

How do I filter an image for red hue? I understand that red lies around zero, between 330° and 30° (represented by 165 to 15 in OpenCV?). How can I use that range with the InRange method, given that it wraps around at 360° (180 in OpenCV)?
I'm detecting hue colour using the following code:
Mat img_hsv, dst;
cap >> image;
cvtColor(image, img_hsv, CV_RGB2HSV);
inRange(img_hsv, Scalar(110, 130, 100), Scalar(140, 255, 255), dst);
where dst is a Mat of the same size as img_hsv, of type CV_8U.
And your scalars determine the filtered colour. In my case it's:
HUE from 110 to 140
SAT from 130 to 255
VAL from 100 to 255
more info here:
OpenCV 2.4 InRange()
I'm not sure about using a hue range that wraps around the 180 boundary, but I think you can calculate the two ranges separately and then combine the resulting Mats, as sketched below.
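A minimal sketch of that approach with OpenCV's Java bindings (the 0-15 and 165-180 hue bands come from the question; the saturation and value bounds are carried over from the example above):

import org.opencv.core.*;

// imgHsv is assumed to already be converted to HSV.
Mat lowerRed = new Mat();
Mat upperRed = new Mat();
Mat redMask = new Mat();

// Red wraps around hue 0, so threshold both ends of the hue axis...
Core.inRange(imgHsv, new Scalar(0, 130, 100), new Scalar(15, 255, 255), lowerRed);
Core.inRange(imgHsv, new Scalar(165, 130, 100), new Scalar(180, 255, 255), upperRed);

// ...then merge the two masks.
Core.bitwise_or(lowerRed, upperRed, redMask);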
