How to change saturation values with OpenCV?

In order to add a constant value to each pixel's saturation, I currently do this in a double loop. I wonder if there is a simpler and faster way to achieve this.

Mat img(200, 300, CV_8UC1);
Mat saturated;
double saturation = 10;
double scale = 1;
// what it does here is dst = (uchar) ((double)src*scale+saturation);
img.convertTo(saturated, CV_8UC1, scale, saturation);
EDIT
If by saturation, you mean the S channel in an HSV image, you need to separate your image into three channels with split(), apply the saturation correction to the S channel, and then put them back together with merge().
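A minimal sketch of that approach in C++, assuming an 8-bit BGR img and reusing the saturation offset from above (the names are illustrative):

Mat hsv;
cvtColor(img, hsv, CV_BGR2HSV);
vector<Mat> channels;
split(hsv, channels);
// channels[1] is S; convertTo adds the offset and clamps to the uchar range
channels[1].convertTo(channels[1], CV_8UC1, 1, saturation);
merge(channels, hsv);
cvtColor(hsv, img, CV_HSV2BGR);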

In my experiments, the alternative method of splitting the HSV values, adjusting the individual channels and then merging them back gave better performance. Below is what worked for me, many times faster than looping through pixels:
(h, s, v) = cv2.split(imghsv)
s = s * satadj            # scale the saturation channel
s = np.clip(s, 0, 255)    # clamp to the valid 8-bit range
imghsv = cv2.merge([h, s, v])
Note that I had converted the values to float32 during the BGR2HSV transformation, to avoid negative values and overflow in the default uint8 type during the saturation adjustment:
imghsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype("float32")
And converted it back to the default uint8 after my saturation adjustment:
imgrgb = cv2.cvtColor(imghsv.astype("uint8"), cv2.COLOR_HSV2BGR)

Related

OpenCV binary threshold with C API: setting a very small threshold filters out most pixels

Here is the picture in grayscale mode:
If I apply thresholding with the threshold set to 0, then according to my understanding the thresholded image should be mostly white, but the result is the opposite.
Result is:
I also tried this:
I built an image with all pixels set to 255, then applied thresholding with a 0 threshold; the returned image is all 255.
The question is:
why is the picture mostly zero (black) after applying thresholding?
Here is the code:
IplImage* g_image = NULL;
IplImage* g_gray = NULL;
int g_thresh = 100;
CvMemStorage* g_storage = NULL;

void on_tracker(int){
    if(g_storage == NULL){
        g_gray = cvCreateImage(cvGetSize(g_image), 8, 1);
        g_storage = cvCreateMemStorage(0);
    }else{
        cvClearMemStorage(g_storage);
    }
    CvSeq* contours = 0;
    cvCvtColor(g_image, g_gray, CV_BGR2GRAY);
    cvNamedWindow("Gray");
    cvShowImage("Gray", g_gray);
    cvThreshold(g_gray, g_gray, g_thresh, 255, CV_THRESH_BINARY);
    cvFindContours(g_gray, g_storage, &contours);
    cvShowImage("Contours", g_gray);
}

int main(int argc, char** argv){
    if( argc != 2 || !(g_image = cvLoadImage(argv[1]))){
        return -1;
    }
    cvNamedWindow("Contours", CV_WINDOW_AUTOSIZE);
    cvCreateTrackbar(
        "Threshold",
        "Contours",
        &g_thresh,
        255,
        on_tracker
    );
    on_tracker(0);
    cvWaitKey();
    return 0;
}
Have a read of the different types of thresholding available to you in the documentation.
Starting with a 1D 'image' with a range of values (the black line) and a threshold (the blue line), the different modes produce the following (the original illustrations are omitted here):
Threshold Binary: pixels above the threshold become the maximum value, the rest become 0.
Threshold Binary Inverted: pixels above the threshold become 0, the rest become the maximum value.
Truncate: pixels above the threshold are set to the threshold, the rest are unchanged.
Threshold to Zero: pixels above the threshold are unchanged, the rest become 0.
Threshold to Zero Inverted: pixels above the threshold become 0, the rest are unchanged.
Please update your question with your code so we know what mode you're using if this answer doesn't help already ;)
Basic thresholding checks whether a pixel's value (say, from 0 to 255) is above the threshold value and, if so, assigns the pixel the maximum value (high intensity: white); this is called binary thresholding.
In your case, setting the threshold to 0 passes practically all of your pixels, since all of them (the low intensities and the higher intensities alike) have values above zero (0).
Maybe you would like a brighter picture; in that case use inverted binary thresholding: there, pixels with value 0 come out white.
According to @Miki's comments, this is caused by the C API. I tried the same process with the Python API and the result is normal:
if I do thresholding with a 0 threshold, most of the pixels are set to 255.
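For reference, a minimal sketch of the same check with the modern C++ API (the image path is a placeholder):

cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
cv::Mat dst;
// with thresh = 0, every pixel > 0 maps to 255 under THRESH_BINARY,
// so a typical photo should come out mostly white
cv::threshold(gray, dst, 0, 255, cv::THRESH_BINARY);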

How to extract dominant color from CIAreaHistogram?

I am looking to find the most dominant color in a UIImage on iOS (the color present in the most pixels) and I stumbled upon Core Image's filter-based API, particularly CIAreaHistogram.
It seems like this filter could help me, but I am struggling to understand the API. Firstly, it says the output of the filter is a one-dimensional image which is the length of your input bins and one pixel in height. How do I read this data? I basically want to figure out the color value with the highest frequency, so I am expecting the data to contain some kind of frequency count for each color; it's not clear to me how this one-dimensional image would represent that, because the documentation does not really explain the data I can expect inside this 1-D image. And if it's truly a histogram, why would it not return a data structure representing that, like a dictionary?
Second, the API asks for a number of bins. What should that input be? If I want an exact analysis, would the input bin parameter be the size of my image's color space? What does making the bin value smaller do? I would imagine it just approximates nearby colors, via Euclidean distance, to the nearest bin. If this is the case, won't that yield inexact histogram results? Why would anyone want that?
Any input on the above two questions from an API perspective would help me greatly.
Ian Ollmann's idea of calculating the histogram just for the hue is really neat and can be done with a simple color kernel. This kernel returns a monochrome image of just the hue of an image (based on this original work)
let shaderString = "kernel vec4 kernelFunc(__sample c)" +
"{" +
" vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);" +
" vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));" +
" vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));" +
" float d = q.x - min(q.w, q.y);" +
" float e = 1.0e-10;" +
" vec3 hsv = vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);" +
" return vec4(vec3(hsv.r), 1.0);" +
"}"
let colorKernel = CIColorKernel(string: shaderString)
If I get the hue of an image of a blue sky, the resulting histogram looks like this:
...while a warm sunset gives a histogram like this:
So, that looks like a good technique to get the dominant hue of an image.
Simon
CIAreaHistogram returns an image where the red, green, blue and alpha values of each of the pixels indicate the frequency of that tone in the image. You can render that image to an array of UInt8 to look at the histogram data. There's also an undocumented outputData value:
let filter = CIFilter(
    name: "CIAreaHistogram",
    withInputParameters: [kCIInputImageKey: image])!
let histogramData = filter.valueForKey("outputData")
However, I've found vImage to be a better framework for working with histograms. First off, you need to create a vImage image format:
var format = vImage_CGImageFormat(
    bitsPerComponent: 8,
    bitsPerPixel: 32,
    colorSpace: nil,
    bitmapInfo: CGBitmapInfo(
        rawValue: CGImageAlphaInfo.PremultipliedLast.rawValue),
    version: 0,
    decode: nil,
    renderingIntent: .RenderingIntentDefault)
vImage works with image buffers that can be created from CGImage rather than CIImage instances (you can create one with the createCGImage method of CIContext). vImageBuffer_InitWithCGImage will create an image buffer:
var inBuffer: vImage_Buffer = vImage_Buffer()
vImageBuffer_InitWithCGImage(
    &inBuffer,
    &format,
    nil,
    imageRef,
    UInt32(kvImageNoFlags))
Now create arrays of UInt which will hold the histogram values for the four channels:
let red = [UInt](count: 256, repeatedValue: 0)
let green = [UInt](count: 256, repeatedValue: 0)
let blue = [UInt](count: 256, repeatedValue: 0)
let alpha = [UInt](count: 256, repeatedValue: 0)
let redPtr = UnsafeMutablePointer<vImagePixelCount>(red)
let greenPtr = UnsafeMutablePointer<vImagePixelCount>(green)
let bluePtr = UnsafeMutablePointer<vImagePixelCount>(blue)
let alphaPtr = UnsafeMutablePointer<vImagePixelCount>(alpha)
let rgba = [redPtr, greenPtr, bluePtr, alphaPtr]
let histogram = UnsafeMutablePointer<UnsafeMutablePointer<vImagePixelCount>>(rgba)
The final step is to perform the calculation, which will populate the four arrays, and free the buffer's data:
vImageHistogramCalculation_ARGB8888(&inBuffer, histogram, UInt32(kvImageNoFlags))
free(inBuffer.data)
A quick check of the alpha array of an opaque image should yield 255 zeros with the final value corresponding to the number of pixels in the image:
print(alpha) // [0, 0, 0, 0, 0 ... 409600]
A histogram won't give you the dominant color from a visual perspective: an image which is half yellow {1,1,0} and half black {0,0,0} will give the same results as an image which is half red {1,0,0} and half green {0,1,0}.
Hope this helps,
Simon
One problem with the histogram approach is that you lose the correlation between the color channels. That is, half your image could be magenta and half yellow. You would find a red histogram that is all in the 1.0 bin, but the blue and green bins would be evenly split between 0.0 and 1.0 with nothing in between. Even though you can be quite sure that red is bright, you won't be able to say much about what the blue and green components should be for the "predominant color".
You could use a 3D histogram with 2**(8+8+8) bins, but this is quite large and you will find the signal is quite sparse. By happenstance, three pixels might land in one bin while no two pixels match anywhere else, even though many users could tell you that there is a predominant color and that it has nothing to do with that bin.
You could make the 3D histogram a lot lower resolution and have (for example) just 16 bins per color channel. It is much more likely that bins will have a statistically meaningful population count this way. This should give you a starting point to find a mean for a local population of pixels in that bin. If each bin had a count and a {R,G,B} sum, then you could quickly find the mean color for pixels in that bin once you had identified the most popular bins. This method is still subject to some influence from the histogram grid. You will be more likely to identify colors in the middle of a grid cell than at the edges. Populations may span multiple grid cells. Something like kmeans might be another method.
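A rough sketch of that coarse 3D histogram idea, written in C++ with OpenCV types for illustration (assumes an 8-bit BGR cv::Mat named image, 16 bins per channel, and <vector> and <algorithm> included; each bin keeps a count and a colour sum):

struct Bin { long count = 0; double b = 0, g = 0, r = 0; };
std::vector<Bin> bins(16 * 16 * 16);
for (int y = 0; y < image.rows; y++) {
    for (int x = 0; x < image.cols; x++) {
        cv::Vec3b px = image.at<cv::Vec3b>(y, x);
        // 256 levels per channel -> 16 bins per channel
        Bin& bin = bins[(px[0] / 16) * 256 + (px[1] / 16) * 16 + px[2] / 16];
        bin.count++;
        bin.b += px[0]; bin.g += px[1]; bin.r += px[2];
    }
}
// the mean colour of the most populated bin approximates the dominant colour
auto top = std::max_element(bins.begin(), bins.end(),
    [](const Bin& a, const Bin& b) { return a.count < b.count; });
cv::Vec3d dominant(top->b / top->count, top->g / top->count, top->r / top->count);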
If you just want predominant hue, then conversion to a color space like HSV followed by a histogram of hue would work.
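As a sketch of that hue-only variant, again with OpenCV in C++ (assuming an 8-bit BGR input named image):

cv::Mat hsv;
cv::cvtColor(image, hsv, cv::COLOR_BGR2HSV);
// histogram over the H channel only; OpenCV stores hue as 0..179
int histSize = 180;
float range[] = {0, 180};
const float* ranges[] = {range};
int channels[] = {0};
cv::Mat hist;
cv::calcHist(&hsv, 1, channels, cv::Mat(), hist, 1, &histSize, ranges);
// the bin with the highest count is the predominant hue
cv::Point maxLoc;
cv::minMaxLoc(hist, 0, 0, 0, &maxLoc);
int dominantHue = maxLoc.y;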
I'm not aware of any filters in vImage, CI or MetalPerformanceShaders to do these things for you. You can certainly write code in either the CPU or Metal to do it without a lot of trouble.

OpenCV - Gaussian Noise

Here's my problem: I'm trying to create a simple program which adds Gaussian noise to an input image. The only constraints are that the input image is of type CV_64F (i.e. double) and the values are, and must be kept, normalized between 0 and 1.
The code I wrote is the following:
Mat noise(input.size(), input.type());
randn(noise, 0, 5); // mean and standard deviation
input += noise;
The above code doesn't work: the resulting image doesn't get displayed properly. I think that happens because it goes out of the [0, 1] range. I modified the code like this:
Mat noise(input.size(), input.type());
randn(noise, 0, 5); // mean and standard deviation
input += noise;
normalize(input, input, 0.0, 1.0, CV_MINMAX, CV_64F);
but it still doesn't work. Again, the resulting image doesn't get displayed properly. Where is the problem? Remember: the input image is of type CV_64F and the values are normalized between 0 and 1 before adding the noise, and they have to stay that way after the noise is added.
Thank you in advance.
Your problem is that Gaussian noise can have arbitrary amplitude and can't be represented in [0, 1]. Renormalizing after adding the noise is a mistake, because just one large noise value could affect the whole image.
Probably what you need to do is saturate the image when adding the noise: values that would be greater than 1.0 are clamped to 1.0, and values that would be less than 0.0 are clamped to 0.0.
Something like
cv::Mat noise(input.size(), input.type());
cv::randn(noise, 0, 5); // mean and standard deviation
input += noise;
cv::Mat clamp_1 = cv::Mat::ones(input.size(), input.type());
cv::Mat clamp_0 = cv::Mat::zeros(input.size(), input.type());
input = cv::max(input, clamp_0);
input = cv::min(input, clamp_1);
Also, a noise standard deviation of 5 is very large: it means there is about a 92% chance that input + noise falls outside the range [0, 1], assuming the input is uniformly distributed on [0, 1]. So your saturated image will be mostly black and white, with the input image having little effect on the result.
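As a sketch, with the image normalized to [0, 1], a much smaller standard deviation (0.05 here, an arbitrary choice) keeps most of the noise within range before clamping:

cv::Mat noise(input.size(), input.type());
cv::randn(noise, 0, 0.05); // mean 0, standard deviation 0.05
input += noise;
// clamp to [0, 1]; cv::max/cv::min also accept a scalar bound
input = cv::max(input, 0.0);
input = cv::min(input, 1.0);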

comparing blob detection and Structural Analysis and Shape Descriptors in opencv

I need to use blob detection and Structural Analysis and Shape Descriptors (more specifically findContours, drawContours and moments) to detect colored circles in an image. I need to know the pros and cons of each method and which method is better. Can anyone show me the differences between those 2 methods please?
As @scap3y suggested in the comments, I'd go for a much simpler approach. What I'm always doing in these cases is something similar to this:
// Convert your image to HSV color space
Mat hsv;
hsv.create(originalImage.size(), CV_8UC3);
cvtColor(originalImage, hsv, CV_RGB2HSV); // note: use CV_BGR2HSV if the image came from imread (BGR order)

// Choose the range in each of hue, saturation and value and threshold the other pixels
Mat thresholded;
uchar loH = 130, hiH = 170;
uchar loS = 40, hiS = 255;
uchar loV = 40, hiV = 255;
inRange(hsv, Scalar(loH, loS, loV), Scalar(hiH, hiS, hiV), thresholded);

// Find contours in the image (an additional step could be to
// apply morphologyEx() first)
vector<vector<Point>> contours;
findContours(thresholded, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

// Draw your contours as ellipses into the original image
// (valuable_rectangle_indices is assumed to hold the indices
// of the contours you decided to keep)
RotatedRect rect;
for(int i = 0; i < (int)valuable_rectangle_indices.size(); i++) {
    rect = minAreaRect(contours[valuable_rectangle_indices[i]]);
    ellipse(originalImage, rect, Scalar(0, 0, 255)); // draw ellipse
}
The only thing left for you to do now is to figure out in what range your markers are in HSV color space.
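If it helps, one quick way to find that range is to convert a sample of your marker colour to HSV and read off the values (the BGR sample here is a made-up reddish colour):

Mat bgrSample(1, 1, CV_8UC3, Scalar(40, 40, 200)); // one BGR pixel
Mat hsvSample;
cvtColor(bgrSample, hsvSample, CV_BGR2HSV);
Vec3b hsvValue = hsvSample.at<Vec3b>(0, 0);
// hsvValue[0] is hue (0..179), hsvValue[1] saturation, hsvValue[2] value;
// pick loH/hiH, loS/hiS and loV/hiV around these numbers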

color a grayscale image with opencv

I'm using OpenNI for a project with a Kinect sensor. I'd like to color the user's pixels given by the depth map. Right now I have pixels going from white to black, but I want them going from red to black. I've tried alpha blending, but the only result is pixels from pink to black, because adding (with addWeighted) red + white gives pink.
This is my current code:
layers = device.getDepth().clone();
cvtColor(layers, layers, CV_GRAY2BGR);
Mat red = Mat(240,320, CV_8UC3, Scalar(255,0,0));
Mat red_body; // = Mat::zeros(240,320, CV_8UC3);
red.copyTo(red_body, device.getUserMask());
addWeighted(red_body, 0.8, layers, 0.5, 0.0, layers);
where device.getDepth() returns a cv::Mat with the depth map and device.getUserMask() returns a cv::Mat with the user's pixels (only white pixels).
Any advice?
EDIT:
One more thing: thanks to Sammy's answer I've done it. But actually I don't have values exactly from 0 to 255; they go from (for example) 123 to 220.
I'm going to find the minimum and maximum via a simple for loop (is there a better way?), and how can I map my values from min-max to 0-255?
First, OpenCV's default color format is BGR not RGB. So, your code for creating the red image should be
Mat red = Mat(240,320, CV_8UC3, Scalar(0,0,255));
For a red-to-black color map, you can use element-wise multiplication instead of alpha blending:
Mat out = red_body.mul(layers, 1.0/255);
You can find the min and max values of a matrix M using
double minVal, maxVal;
minMaxLoc(M, &minVal, &maxVal, 0, 0);
You can then subtract the minimum value and scale with a factor:
double factor = 255.0/(maxVal - minVal);
M = factor*(M - minVal);
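Alternatively, OpenCV's normalize() does the same min/max stretch in one call:

normalize(M, M, 0, 255, NORM_MINMAX); // maps min -> 0 and max -> 255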
Kinda clumsy and slow, but maybe split layers, copy red_body (make it a one channel Mat, not 3) to the red channel, merge them back into layers?
Get the same effect, but much faster (in place) with reshape:
layers = device.getDepth().clone();
cvtColor(layers, layers, CV_GRAY2BGR);
Mat red = Mat(240,320, CV_8UC1, Scalar(255)); // One channel
Mat red_body;
red.copyTo(red_body, device.getUserMask());
Mat flatLayer = layers.reshape(1,240*320); // presumed dimensions of layer
red_body.reshape(0,240*320).copyTo(flatLayer.col(0));
// layers now has the red from red_body
