Finding the median value of an RGB image in OpenCV? - opencv

Is there any easy way of finding the median value of an RGB image in OpenCV using C?
In MATLAB we can just extract the arrays corresponding to the three channels and compute a median for each of them with median(median(array)). Finally, the median of these three per-channel medians gives the overall value.

You can convert the matrix to a histogram via the calcHist function (once for each channel), then calculate the median for a given channel by using the function available here.
Note: I have not tested that linked code, but it should at least give you an idea of how to get started.
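To make the histogram approach concrete, here is a sketch in plain Python (my own illustration, not the linked code): walk the cumulative bin counts until half of the pixels have been passed.

```python
# Sketch (not the linked code) of taking the median from a 256-bin
# histogram, as calcHist would produce for one 8-bit channel.
def median_from_histogram(hist):
    """Return the bin index at which the cumulative count first
    reaches half of the total number of pixels."""
    total = sum(hist)
    half = total / 2.0
    cumulative = 0
    for value, count in enumerate(hist):
        cumulative += count
        if cumulative >= half:
            return value
    return len(hist) - 1

# Example: 10 pixels with known values, so the median is easy to check.
hist = [0] * 256
for v in [10, 10, 20, 30, 30, 30, 40, 50, 200, 255]:
    hist[v] += 1
print(median_from_histogram(hist))  # 30
```

Run this once per channel, then take the median of the three results, as described in the question.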

Related

How to Compute RGB Image Standard Deviation from per Channel values

Is there any way to compute an RGB image's standard deviation from the per-channel values?
What I mean is: I have a standard deviation value for each channel; these are easily calculated with the cv::meanStdDev() function. What I'd like is a single stddev value for the whole image.
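One way to combine them (an assumption about what a single stddev should mean here: pooling all pixels of all three channels into one population) is the law of total variance, which for equal-sized channels reduces to simple averages of the per-channel statistics:

```python
import math

# Sketch: combine per-channel means and stddevs (as cv::meanStdDev
# would return them) into one stddev over all pixels of all channels.
# Law of total variance: pooled variance = mean of the channel
# variances + variance of the channel means (channels are equal-sized,
# so plain averages suffice; population stddevs assumed).
def combined_stddev(means, stddevs):
    grand_mean = sum(means) / len(means)
    within = sum(s * s for s in stddevs) / len(stddevs)
    between = sum((m - grand_mean) ** 2 for m in means) / len(means)
    return math.sqrt(within + between)

means = [100.0, 120.0, 140.0]   # hypothetical per-channel means
stddevs = [10.0, 12.0, 8.0]     # hypothetical per-channel stddevs
print(combined_stddev(means, stddevs))
```

Note that simply averaging the three stddevs would ignore the spread between the channel means, which this formula accounts for.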

Are Wavelet Coefficients simply the pixel values of decomposed image in 2D Discrete Wavelet Transform

I've been working with the Discrete Wavelet Transform, and I'm new to this theory. I want to access and modify the wavelet coefficients of the decomposed image. Are those wavelet coefficients simply the pixel values of the decomposed image in the 2D DWT?
This is, for example, the result of a DWT decomposition (image not shown).
So, when I want to access and modify the wavelet coefficients, can I just iterate through the pixel values of the image above? Thank you for your help.
No. The image is merely illustrative.
The image you are looking at does not exactly correspond to the original coefficients. The original wavelet coefficients are real numbers; what you see are their absolute values quantized into the range 0 to 255.
It is also not true that the coefficients were calculated as pairwise sums and differences of the input samples. They were calculated using two complementary filters; see the description here. Crucially, the displayed coefficients have been adjusted, so it is no longer possible to synthesize the original image from them. If you need to synthesize the image, you cannot simply use the pixels of the referenced image.

Image Smoothing using map-reduce

I have an image whose RGB values for each pixel are stored in a 2-D array. Assuming I want to apply a basic 3x3 averaging algorithm to smooth the image, how can I implement such an algorithm using the map-reduce paradigm?
Thanks in advance.
This took me a while to think through in the map-reduce paradigm, but here it is:
Map Task
Input - (x-coordinate,y-coordinate,RGB value)
Output - 9 tuples which are these {(x,y,RGB),(x-1,y,RGB),(x-1,y-1,RGB),(x,y-1,RGB),(x+1,y-1,RGB),(x+1,y,RGB),(x-1,y+1,RGB),(x,y+1,RGB),(x+1,y+1,RGB)}
Reduce Task
The framework will sort all these tuples by key (x-coordinate, y-coordinate) and group them. So now, for each pixel, you have the 9 RGB values of its 3x3 neighborhood (including itself). We simply average them in the reduce task and output a tuple ----> (x, y, avg_RGB)
So, basically, instead of each pixel gathering the RGB values of all its neighboring pixels for itself, it broadcasts its own RGB value to each of its neighbors.
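The scheme above can be simulated in plain Python (a toy illustration, not an actual Hadoop job; a defaultdict stands in for the framework's shuffle phase, and border pixels simply average the values they actually receive):

```python
from collections import defaultdict

# Map task: each pixel emits its own value under each in-bounds
# neighbor's key (and its own), i.e. "I am a neighbor of (nx, ny)".
def map_task(x, y, value, width, height):
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            nx, ny = x + dx, y + dy
            if 0 <= nx < width and 0 <= ny < height:
                yield (nx, ny), value

def smooth(image):  # image: 2-D list of grayscale values, for simplicity
    height, width = len(image), len(image[0])
    groups = defaultdict(list)          # shuffle: group values by key
    for y in range(height):
        for x in range(width):
            for key, value in map_task(x, y, image[y][x], width, height):
                groups[key].append(value)
    # Reduce task: average the values collected for each pixel.
    return [[sum(groups[(x, y)]) / len(groups[(x, y)])
             for x in range(width)] for y in range(height)]

print(smooth([[0, 0, 0], [0, 9, 0], [0, 0, 0]]))  # centre becomes 1.0
```

An interior pixel receives exactly 9 values, so this reproduces the 3x3 box average; averaging only the received values at the borders is one possible edge-handling choice, not the only one.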
Hope this helps :)

Feature dimension of RGB color histogram?

I am unsure about this, but I want to compute features around interest points detected by SURF using an RGB color histogram. I guess the final feature will be 256-dimensional, but I am unsure if this is correct.
The dimension of the RGB color histogram is determined by how many bins you use for each channel. The dimension will be 24 (8+8+8) if you use 8 bins for each of them.
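A quick sketch of the 24-dimensional case (hypothetical pixel data; the three per-channel 8-bin histograms are concatenated into one feature vector):

```python
# Sketch: an 8-bins-per-channel RGB histogram, concatenated into a
# 24-dimensional feature vector. Pixel values are assumed 0..255.
def rgb_histogram(pixels, bins=8):
    feature = [0] * (3 * bins)
    for r, g, b in pixels:
        feature[0 * bins + r * bins // 256] += 1   # red bins   0..7
        feature[1 * bins + g * bins // 256] += 1   # green bins 8..15
        feature[2 * bins + b * bins // 256] += 1   # blue bins  16..23
    return feature

pixels = [(255, 0, 0), (250, 10, 5), (0, 255, 0)]  # two reddish, one green
feature = rgb_histogram(pixels)
print(len(feature))  # 24
```

A 256-dimensional feature would correspond to a single 256-bin histogram of one channel (or of a grayscale image), not to a per-channel RGB histogram.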

1D histogram opencv with double values

I'm trying to create a histogram using OpenCV. I have an image (32-bit) that came out of a blurring operation, so I just know that the values are in the range [-0.5, 0.5], but I don't know anything else about the starting data.
The problem is that I don't understand how to set the parameters to compute such a histogram.
The code I wrote is:
int numbins = 1000;
float range[] = {-0.5f, 0.5f};          /* min and max of the data */
float *ranges[] = {range};
CvHistogram *hist = cvCreateHist(1, &numbins, CV_HIST_ARRAY, ranges, 1);
cvCalcHist(&img, hist, 0, NULL);        /* no accumulation, no mask */
where img is the image I want the histogram of. If I try to print the histogram I just get a black picture, while the same function gives me a correct histogram if I use a grayscale 8-bit image.
Have you looked at the calcHist example? The camshiftdemo also makes heavy use of histograms.
Are you normalizing the histogram output with normalize before displaying it (camshiftdemo shows how to do this)? Values near 0 appear black when displayed, but normalized to, say, 0 to 255 they show up nicely.
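The effect can be sketched in plain Python (an illustration of the idea behind min-max normalization, as cv::normalize with NORM_MINMAX performs it; not OpenCV code):

```python
# Bin values that are tiny fractions of the image area render as
# near-black pixels; rescaling them into the displayable 0-255 range
# makes the histogram shape visible.
def normalize_minmax(values, lo=0.0, hi=255.0):
    vmin, vmax = min(values), max(values)
    if vmax == vmin:
        return [lo for _ in values]     # flat input: nothing to stretch
    scale = (hi - lo) / (vmax - vmin)
    return [lo + (v - vmin) * scale for v in values]

raw = [0.0, 0.25, 1.0, 0.5]             # raw bin values, all "dark"
print(normalize_minmax(raw))            # [0.0, 63.75, 255.0, 127.5]
```

After this rescaling, the tallest bin maps to full brightness and the rest are spread proportionally in between.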
