The Google Maps iOS SDK's heat map (more specifically the Google-Maps-iOS-Utils framework) essentially decides the color to render an area in by calculating the density of the points in that area.
However, I would like to instead select the color based on the average weight or intensity of the points in that area.
From what I understand, this behavior is not built in (but who knows, the documentation sort of sucks). I believe the color-picking is decided in /src/Heatmap/GMUHeatmapTileLayer.m. This is a relatively short file, but I am not very well versed in Objective-C, so I am having some difficulty figuring out what does what. I think -tileForX:y:zoom: in GMUHeatmapTileLayer.m is the important method, but I'm not sure, and even if it is, I don't quite know how to modify it. Towards the end of this method, the data is 'convolved', first horizontally and then vertically. I think this is where the intensities are actually calculated. Unfortunately, I do not know exactly what it is doing, and I am afraid of changing things because I am not fluent in Objective-C. This is what the convolve part of the method looks like:
- (UIImage *)tileForX:(NSUInteger)x y:(NSUInteger)y zoom:(NSUInteger)zoom {
  // ...
  // Convolve data.
  int lowerLimit = (int)data->_radius;
  int upperLimit = paddedTileSize - (int)data->_radius - 1;
  // Convolve horizontally first.
  float *intermediate = calloc(paddedTileSize * paddedTileSize, sizeof(float));
  for (int y = 0; y < paddedTileSize; y++) {
    for (int x = 0; x < paddedTileSize; x++) {
      float value = intensity[y * paddedTileSize + x];
      if (value != 0) {
        // Convolve to x +/- radius bounded by the limit we care about.
        int start = MAX(lowerLimit, x - (int)data->_radius);
        int end = MIN(upperLimit, x + (int)data->_radius);
        for (int x2 = start; x2 <= end; x2++) {
          float scaledKernel = value * [data->_kernel[x2 - x + data->_radius] floatValue];
          // I THINK THIS IS WHERE I NEED TO MAKE THE CHANGE
          intermediate[y * paddedTileSize + x2] += scaledKernel;
          // ^
        }
      }
    }
  }
  free(intensity);
  // Convolve vertically to get final intensity.
  float *finalIntensity = calloc(kGMUTileSize * kGMUTileSize, sizeof(float));
  for (int x = lowerLimit; x <= upperLimit; x++) {
    for (int y = 0; y < paddedTileSize; y++) {
      float value = intermediate[y * paddedTileSize + x];
      if (value != 0) {
        int start = MAX(lowerLimit, y - (int)data->_radius);
        int end = MIN(upperLimit, y + (int)data->_radius);
        for (int y2 = start; y2 <= end; y2++) {
          float scaledKernel = value * [data->_kernel[y2 - y + data->_radius] floatValue];
          // I THINK THIS IS WHERE I NEED TO MAKE THE CHANGE
          finalIntensity[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit] += scaledKernel;
          // ^
        }
      }
    }
  }
  free(intermediate);
  // ...
}
This is the method where the intensities are calculated for each tile, right? If so, how can I change it to achieve my desired effect (averaged rather than summed colors, which I think are proportional to intensity)?
So: how can I get averaged instead of summed intensities by modifying the framework?
I think you are on the right track. To calculate an average you divide the point sum by the point count. Since you already have the sums calculated, an easy solution would be to also save the count for each point. If I understand it correctly, this is what you have to do.
When allocating memory for the sums, also allocate memory for the counts:
// At this place
float *intermediate = calloc(paddedTileSize * paddedTileSize, sizeof(float));
// Add this line, calloc will initialize them to zero
int *counts = calloc(paddedTileSize * paddedTileSize, sizeof(int));
Then increase the count in each loop.
// Below this line (first loop)
intermediate[y * paddedTileSize + x2] += scaledKernel;
// Add this
counts[y * paddedTileSize + x2]++;
// And below this line (second loop)
finalIntensity[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit] += scaledKernel;
// Add this
counts[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit]++;
After the two loops you should have two arrays: one with your sums (finalIntensity) and one with your counts (counts). Now go through the values and calculate the averages:
// Note: finalIntensity is only kGMUTileSize x kGMUTileSize, so iterate over that size.
for (int y = 0; y < kGMUTileSize; y++) {
    for (int x = 0; x < kGMUTileSize; x++) {
        int n = y * kGMUTileSize + x;
        if (counts[n] != 0)
            finalIntensity[n] = finalIntensity[n] / counts[n];
    }
}
free(counts);
The finalIntensity should now contain your averages.
If you prefer, and the rest of the code makes it possible, you can skip the last loop and instead do the division when using the final intensity values. Just change any subsequent finalIntensity[n] to counts[n] == 0 ? finalIntensity[n] : finalIntensity[n] / counts[n].
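As a sketch, that divide-on-use variant can be wrapped in a small helper (illustrative only, not part of the framework; the names mirror the arrays above):
/* Returns the averaged intensity for flat index n without touching the sums. */
static inline float averagedIntensity(const float *finalIntensity, const int *counts, int n) {
    // counts[n] == 0 means nothing was accumulated there; return the raw value.
    return counts[n] == 0 ? finalIntensity[n] : finalIntensity[n] / counts[n];
}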
I may have just solved the same issue for the Java version.
My problem was having a custom gradient with 12 different values, but my actual weighted data does not necessarily contain all intensity values from 1 to 12.
The problem is that the highest intensity value present gets mapped to the highest color. Also, 10 data points with intensity 1 that are close together get the same color as a single point with intensity 12.
So the function where the tile gets created is a good starting point:
Java:
public Tile getTile(int x, int y, int zoom) {
    // ...
    // Quantize points
    int dim = TILE_DIM + mRadius * 2;
    double[][] intensity = new double[dim][dim];
    int[][] count = new int[dim][dim];
    for (WeightedLatLng w : points) {
        Point p = w.getPoint();
        int bucketX = (int) ((p.x - minX) / bucketWidth);
        int bucketY = (int) ((p.y - minY) / bucketWidth);
        intensity[bucketX][bucketY] += w.getIntensity();
        count[bucketX][bucketY]++;
    }
    // Quantize wraparound points (taking xOffset into account)
    for (WeightedLatLng w : wrappedPoints) {
        Point p = w.getPoint();
        int bucketX = (int) ((p.x + xOffset - minX) / bucketWidth);
        int bucketY = (int) ((p.y - minY) / bucketWidth);
        intensity[bucketX][bucketY] += w.getIntensity();
        count[bucketX][bucketY]++;
    }
    for (int bx = 0; bx < dim; bx++)
        for (int by = 0; by < dim; by++)
            if (count[bx][by] != 0)
                intensity[bx][by] /= count[bx][by];
    // ...
I added a counter and count every addition to the intensities; after that, I go through every intensity and calculate the average.
For Objective-C:
- (UIImage *)tileForX:(NSUInteger)x y:(NSUInteger)y zoom:(NSUInteger)zoom {
  // ...
  // Quantize points.
  int paddedTileSize = kGMUTileSize + 2 * (int)data->_radius;
  float *intensity = calloc(paddedTileSize * paddedTileSize, sizeof(float));
  int *count = calloc(paddedTileSize * paddedTileSize, sizeof(int));
  for (GMUWeightedLatLng *item in points) {
    GQTPoint p = [item point];
    int x = (int)((p.x - minX) / bucketWidth);
    // Flip y axis as world space goes south to north, but tile content goes north to south.
    int y = (int)((maxY - p.y) / bucketWidth);
    // If the point is just on the edge of the query area, the bucketing could put it
    // outside bounds.
    if (x >= paddedTileSize) x = paddedTileSize - 1;
    if (y >= paddedTileSize) y = paddedTileSize - 1;
    intensity[y * paddedTileSize + x] += item.intensity;
    count[y * paddedTileSize + x]++;
  }
  for (GMUWeightedLatLng *item in wrappedPoints) {
    GQTPoint p = [item point];
    int x = (int)((p.x + wrappedPointsOffset - minX) / bucketWidth);
    // Flip y axis as world space goes south to north, but tile content goes north to south.
    int y = (int)((maxY - p.y) / bucketWidth);
    // If the point is just on the edge of the query area, the bucketing could put it
    // outside bounds.
    if (x >= paddedTileSize) x = paddedTileSize - 1;
    if (y >= paddedTileSize) y = paddedTileSize - 1;
    // For wrapped points, additional shifting risks the bucketing slipping just outside
    // due to numerical instability.
    if (x < 0) x = 0;
    intensity[y * paddedTileSize + x] += item.intensity;
    count[y * paddedTileSize + x]++;
  }
  for (int i = 0; i < paddedTileSize * paddedTileSize; i++)
    if (count[i] != 0)
      intensity[i] /= count[i];
Next is the convolving.
What I did there is make sure that the calculated value does not go over the maximum in my data.
Java:
// Convolve it ("smoothen" it out)
double[][] convolved = convolve(intensity, mKernel, mMaxAverage);

// mMaxAverage gets set here:
public void setWeightedData(Collection<WeightedLatLng> data) {
    // ...
    // Add points to quad tree
    for (WeightedLatLng l : mData) {
        mTree.add(l);
        mMaxAverage = Math.max(l.getIntensity(), mMaxAverage);
    }
    // ...

// And finally the convolve method:
static double[][] convolve(double[][] grid, double[] kernel, double max) {
    // ...
    intermediate[x2][y] += val * kernel[x2 - (x - radius)];
    if (intermediate[x2][y] > max) intermediate[x2][y] = max;
    // ...
    outputGrid[x - radius][y2 - radius] += val * kernel[y2 - (y - radius)];
    if (outputGrid[x - radius][y2 - radius] > max) outputGrid[x - radius][y2 - radius] = max;
For Objective-C:
// To get the maximum average you could do that here:
- (void)setWeightedData:(NSArray<GMUWeightedLatLng *> *)weightedData {
  _weightedData = [weightedData copy];
  for (GMUWeightedLatLng *dataPoint in _weightedData)
    _maxAverage = MAX(dataPoint.intensity, _maxAverage);
  // ...

// And then simply in the convolve section
intermediate[y * paddedTileSize + x2] += scaledKernel;
if (intermediate[y * paddedTileSize + x2] > _maxAverage)
  intermediate[y * paddedTileSize + x2] = _maxAverage;
// ...
finalIntensity[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit] += scaledKernel;
if (finalIntensity[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit] > _maxAverage)
  finalIntensity[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit] = _maxAverage;
And finally, the coloring.
Java:
// The maximum intensity is simply the size of my gradient colors array (or the starting points)
Bitmap bitmap = colorize(convolved, mColorMap, mGradient.mStartPoints.length);
For Objective-C:
// Generate coloring
// ...
float max = [data->_maxIntensities[zoom] floatValue];
max = _gradient.startPoints.count; // override the per-zoom maximum with the gradient size
I did this in Java and it worked for me; I am not sure about the Objective-C code, though.
You have to play around with the radius, and you could even edit the kernel. I found that with a lot of homogeneous data (i.e. little variation in the intensities, or a lot of data in general), the heat map degenerates into a one-colored overlay, because the gradient at the edges gets smaller and smaller.
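If you do want to edit the kernel, here is a minimal sketch in plain C of how a 1-D Gaussian-style kernel of the kind these heat maps use could be generated (the function and the sigma = radius / 3 choice are mine, not the library's):
#include <math.h>

/* Fills kernel[0 .. 2*radius] with Gaussian weights centered at index 'radius'.
 * A larger sigma flattens the falloff; radius / 3 keeps the tails small. */
static void makeGaussianKernel(float *kernel, int radius) {
    float sigma = radius / 3.0f;
    for (int i = -radius; i <= radius; i++)
        kernel[i + radius] = expf(-(float)(i * i) / (2.0f * sigma * sigma));
}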
But I hope this helps anyway.
// Erik
I have written a filter for image blurring in C and it's working fine. I am trying to run it on the GPU using CUDA C for faster processing. The program has a few if and else conditions, as can be seen below in the C code version.
The inputs to the function are the input image, the output image, and the number of columns.
void convolve_young1D(double * in, double * out, int datasize) {
    int i, j;
    /* Compute first 3 output elements */
    out[0] = B*in[0];
    out[1] = B*in[1] + bf[2]*out[0];
    out[2] = B*in[2] + (bf[1]*out[0] + bf[2]*out[1]);
    /* Recursive computation of output in forward direction using filter parameters bf and B */
    for (i = 3; i < datasize; i++) {
        out[i] = B*in[i];
        for (j = 0; j < 3; j++) {
            out[i] += bf[j]*out[i-(3-j)];
        }
    }
}
// Calling function below
void convolve_young2D(int rows, int columns, int sigma, double ** ip_padded) {
    /** \brief Filter radius */
    w = 3*sigma;
    /** \brief Filter parameter q */
    double q;
    if (sigma < 2.5)
        q = 3.97156 - 4.14554*sqrt(1-0.26891*sigma);
    else
        q = 0.98711*sigma - 0.9633;
    /** \brief Filter parameters b0, b1, b2, b3 */
    double b0 = 1.57825 + 2.44413*q + 1.4281*q*q + 0.422205*q*q*q;
    double b1 = 2.44413*q + 2.85619*q*q + 1.26661*q*q*q;
    double b2 = -(1.4281*q*q + 1.26661*q*q*q);
    double b3 = 0.422205*q*q*q;
    /** \brief Filter parameters bf, bb, B */
    bf[0] = b3/b0; bf[1] = b2/b0; bf[2] = b1/b0;
    bb[0] = b1/b0; bb[1] = b2/b0; bb[2] = b3/b0;
    B = 1 - (b1+b2+b3)/b0;
    int i, j;
    /* Convolve each row with 1D Gaussian filter */
    double *out_t = calloc(columns+(2*w), sizeof(double));
    for (i = 0; i < rows+2*w; i++) {
        convolve_young1D(ip_padded[i], out_t, columns+2*w);
    }
    free(out_t);
}
I tried the same approach with blocks and threads in CUDA C but wasn't successful: I have been getting zeros as output, and even the input values seem to change to zeros. I don't know where I am going wrong; please do help. I am pretty new to CUDA C programming. Here is my attempted version of the CUDA kernel.
__global__ void convolve_young2D(float *in, float *out, int rows, int columns, int j, float B, float bf[3], int w) {
    int k;
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    if ((x > 0) && (x < (rows+2*w)))
    {
        //printf("%d \t",x);
        if (j == 0)
        {
            // Compute first output elements
            out[x*columns] = B*in[x*columns];
        }
        else if (j == 1)
        {
            out[x*columns + 1] = B*in[x*columns + 1] + bf[2]*out[x*columns];
        }
        else if (j == 2)
        {
            out[2] = B*in[x*columns + 2] + (bf[1]*out[x*columns] + bf[2]*out[x*columns + 1]);
        }
        else {
            // Recursive computation of output in forward direction using filter parameters bf and B
            out[x*columns+j] = B*in[x*columns+j];
            for (k = 0; k < 3; k++) {
                out[x*columns + j] += bf[k]*out[(x*columns+j)-(3-k)];
            }
        }
    }
}
// Calling function below
void convolve_young2D(int rows, int columns, int sigma, const float * const ip_padded, float * const op_padded) {
    float bf[3], bb[3];
    float B;
    int w;
    /** \brief Filter radius */
    w = 3*sigma;
    /** \brief Filter parameter q */
    float q;
    if (sigma < 2.5)
        q = 3.97156 - 4.14554*sqrt(1-0.26891*sigma);
    else
        q = 0.98711*sigma - 0.9633;
    /** \brief Filter parameters b0, b1, b2, b3 */
    float b0 = 1.57825 + 2.44413*q + 1.4281*q*q + 0.422205*q*q*q;
    float b1 = 2.44413*q + 2.85619*q*q + 1.26661*q*q*q;
    float b2 = -(1.4281*q*q + 1.26661*q*q*q);
    float b3 = 0.422205*q*q*q;
    /** \brief Filter parameters bf, bb, B */
    bf[0] = b3/b0; bf[1] = b2/b0; bf[2] = b1/b0;
    bb[0] = b1/b0; bb[1] = b2/b0; bb[2] = b3/b0;
    B = 1 - (b1+b2+b3)/b0;
    int p;
    const int inputBytes = (rows+2*w) * (columns+2*w) * sizeof(float);
    float *d_input, *d_output; // arrays in the GPU's global memory
    cudaMalloc(&d_input, inputBytes);
    cudaMemcpy(d_input, ip_padded, inputBytes, cudaMemcpyHostToDevice);
    cudaMalloc(&d_output, inputBytes);
    for (p = 0; p < columns+2*w; p++) {
        convolve_young<<<4,500>>>(d_input, d_output, rows, columns, p, B, bf, w);
    }
    cudaMemcpy(op_padded, d_input, inputBytes, cudaMemcpyDeviceToHost);
    cudaFree(d_input);
The first problem is that you call convolve_young<<<4,500>>>(d_input,d_output,rows,columns,p,B,bf,w); but you defined a kernel named convolve_young2D.
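For instance, renamed so it stays distinct from the host-side convolve_young2D (the _kernel suffix is just illustrative), the launch would be:
convolve_young2D_kernel<<<4,500>>>(d_input, d_output, rows, columns, p, B, bf, w);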
Another possible problem is that to do the convolution you do:
for (p = 0; p < columns+2*w; p++) {
    convolve_young<<<4,500>>>(d_input, d_output, rows, columns, p, B, bf, w);
}
Here you're looping over the columns instead of the rows compared to the CPU algorithm:
for (i = 0; i < rows+2*w; i++) {
    convolve_young1D(ip_padded[i], out_t, columns+2*w);
}
First you should try to do a direct port of your CPU algorithm, computing one line at a time, and then modify it to transfer the whole image.
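One possible shape for that direct port (a sketch, untested; the kernel name and launch configuration are mine): keep the recursion sequential inside each thread and parallelize across rows. Passing the three filter coefficients by value also avoids handing the kernel a pointer to the host array bf, which the device cannot dereference:
__global__ void convolve_young_rows(const float *in, float *out, int rows, int columns,
                                    float B, float bf0, float bf1, float bf2) {
    int r = blockIdx.x * blockDim.x + threadIdx.x;
    if (r >= rows) return;              // one thread per row
    const float *inRow = in + r * columns;
    float *outRow = out + r * columns;
    // First 3 output elements, exactly as in convolve_young1D.
    outRow[0] = B*inRow[0];
    outRow[1] = B*inRow[1] + bf2*outRow[0];
    outRow[2] = B*inRow[2] + bf1*outRow[0] + bf2*outRow[1];
    // Forward recursion stays sequential within the row.
    for (int i = 3; i < columns; i++)
        outRow[i] = B*inRow[i] + bf0*outRow[i-3] + bf1*outRow[i-2] + bf2*outRow[i-1];
}

// A single launch then replaces the per-column loop (padded sizes as in your host code):
// convolve_young_rows<<<(rows + 2*w + 255) / 256, 256>>>(d_input, d_output,
//     rows + 2*w, columns + 2*w, B, bf[0], bf[1], bf[2]);
Also remember to copy d_output, not d_input, back into op_padded once the convolution is done.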
I need to change the size of the bubbles (points) by supplying a 4th value in the data points of Highcharts' 3D scatter chart. I couldn't find any way to do this. Can anyone help?
It seems that this is not supported out of the box. However, in this thread in the Highcharts forum, a wrapper is shown that allows a 4th value w to be used as the size of the bubble (see http://jsfiddle.net/uqLfm1k6/1/):
(function (H) {
    H.wrap(H.seriesTypes.bubble.prototype, 'getRadii', function (proceed, zMin, zMax, minSize, maxSize) {
        var math = Math,
            len,
            i,
            pos,
            zData = this.zData,
            wData = this.userOptions.data.map( function(e){ return e.w } ), // ADDED
            radii = [],
            options = this.options,
            sizeByArea = options.sizeBy !== 'width',
            zThreshold = options.zThreshold,
            zRange = zMax - zMin,
            value,
            radius;

        // Set the shape type and arguments to be picked up in drawPoints
        for (i = 0, len = zData.length; i < len; i++) {
            // value = zData[i]; // DELETED
            value = this.chart.is3d() ? wData[i] : zData[i]; // ADDED
            // When sizing by threshold, the absolute value of z determines the size
            // of the bubble.
            if (options.sizeByAbsoluteValue && value !== null) {
                value = Math.abs(value - zThreshold);
                zMax = Math.max(zMax - zThreshold, Math.abs(zMin - zThreshold));
                zMin = 0;
            }
            if (value === null) {
                radius = null;
            // Issue #4419 - if value is less than zMin, push a radius that's always smaller than the minimum size
            } else if (value < zMin) {
                radius = minSize / 2 - 1;
            } else {
                // Relative size, a number between 0 and 1
                pos = zRange > 0 ? (value - zMin) / zRange : 0.5;
                if (sizeByArea && pos >= 0) {
                    pos = Math.sqrt(pos);
                }
                radius = math.ceil(minSize + pos * (maxSize - minSize)) / 2;
            }
            radii.push(radius);
        }
        this.radii = radii;
    });
}(Highcharts));
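With this wrapper in place, each point of a 3D scatter series can carry a fourth member, e.g. { x: 1, y: 2, z: 3, w: 10 } (values purely illustrative); the patched getRadii() reads w from userOptions.data whenever the chart is 3D and sizes the bubble from it instead of from z.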
I want to filter a pix with a convolution kernel, but with a bias, and I don't see how to emulate the bias using the Leptonica API.
So far I have:
PIX* pixs = pixRead("file.png");
L_KERNEL* kel = kernelCreateFromString( 7, 7, 3, 3, "..." );
PIX* pixd = pixConvolve( pixs, kel, 8, 1 );
Any ideas how to emulate the classical "bias"? I tried to add its value to each pixel of the image before or after pixConvolve, but the result is not the one observed with most image processing software.
By "bias", I am assuming that you want to shift the result so that all pixel values are non-negative.
In the notes for pixConvolve(), it says that the absolute value is taken to avoid negative output. It also says that if you wish to keep the negative values, use fpixConvolve() instead, which operates on an FPix and generates an FPix.
If you want a biased result without clipping, it is in general necessary to do the following:
(1) pixConvertToFpix() -- convert to an FPix
(2) fpixConvolve() -- do the convolution on the FPix, producing an FPix
(3) fpixGetMin() -- determine the bias required to make all values non-negative
(4) fpixAddMultConstant() -- add the bias to the FPix
(5) fpixGetMax() -- find the max value; if > 255, you need a 16 bpp Pix to represent it
(6) fpixConvertToPix() -- convert back to a pix
Perhaps the leptonica maintainer (me) should bundle this up into a simple interface ;-)
OK, here's a function, following the outline that I wrote above, that should give enough flexibility to do these convolutions.
/*!
 *  pixConvolveWithBias()
 *
 *      Input:  pixs (8 bpp; no colormap)
 *              kel1
 *              kel2 (can be null; use if separable)
 *              force8 (if 1, force output to 8 bpp; otherwise, determine
 *                      output depth by the dynamic range of pixel values)
 *              &bias (<return> applied bias)
 *      Return: pixd (8 or 16 bpp)
 *
 *  Notes:
 *      (1) This does a convolution with either a single kernel or
 *          a pair of separable kernels, and automatically applies whatever
 *          bias (shift) is required so that the resulting pixel values
 *          are non-negative.
 *      (2) If there are no negative values in the kernel, a normalized
 *          convolution is performed, with 8 bpp output.
 *      (3) If there are negative values in the kernel, the pix is
 *          converted to an fpix, the convolution is done on the fpix, and
 *          a bias (shift) may need to be applied.
 *      (4) If force8 == TRUE and the range of values after the convolution
 *          is > 255, the output values will be scaled to fit in [0 ... 255].
 *          If force8 == FALSE, the output will be either 8 or 16 bpp,
 *          to accommodate the dynamic range of output values without scaling.
 */
PIX *
pixConvolveWithBias(PIX       *pixs,
                    L_KERNEL  *kel1,
                    L_KERNEL  *kel2,
                    l_int32    force8,
                    l_int32   *pbias)
{
l_int32    outdepth;
l_float32  min1, min2, min, minval, maxval, range;
FPIX      *fpix1, *fpix2;
PIX       *pixd;

    PROCNAME("pixConvolveWithBias");

    if (!pixs || pixGetDepth(pixs) != 8)
        return (PIX *)ERROR_PTR("pixs undefined or not 8 bpp", procName, NULL);
    if (pixGetColormap(pixs))
        return (PIX *)ERROR_PTR("pixs has colormap", procName, NULL);
    if (!kel1)
        return (PIX *)ERROR_PTR("kel1 not defined", procName, NULL);

        /* Determine if negative values can be produced in convolution */
    kernelGetMinMax(kel1, &min1, NULL);
    min2 = 0.0;
    if (kel2)
        kernelGetMinMax(kel2, &min2, NULL);
    min = L_MIN(min1, min2);
    if (min >= 0.0) {
        if (!kel2)
            return pixConvolve(pixs, kel1, 8, 1);
        else
            return pixConvolveSep(pixs, kel1, kel2, 8, 1);
    }

        /* Bias may need to be applied; convert to fpix and convolve */
    fpix1 = pixConvertToFPix(pixs, 1);
    if (!kel2)
        fpix2 = fpixConvolve(fpix1, kel1, 1);
    else
        fpix2 = fpixConvolveSep(fpix1, kel1, kel2, 1);
    fpixDestroy(&fpix1);

        /* Determine the bias and the dynamic range.
         * If the dynamic range is <= 255, just shift the values by the
         * bias, if any.
         * If the dynamic range is > 255, there are two cases:
         *    (1) the output depth is not forced to 8 bpp ==> outdepth = 16
         *    (2) the output depth is forced to 8 ==> linearly map the
         *        pixel values to [0 ... 255]. */
    fpixGetMin(fpix2, &minval, NULL, NULL);
    fpixGetMax(fpix2, &maxval, NULL, NULL);
    range = maxval - minval;
    *pbias = (minval < 0.0) ? -minval : 0.0;
    fpixAddMultConstant(fpix2, *pbias, 1.0);  /* shift: min val ==> 0 */
    if (range <= 255 || !force8) {  /* no scaling of output values */
        outdepth = (range > 255) ? 16 : 8;
    } else {  /* scale output values to fit in 8 bpp */
        fpixAddMultConstant(fpix2, 0.0, (255.0 / range));
        outdepth = 8;
    }

        /* Convert back to pix; it won't do any clipping */
    pixd = fpixConvertToPix(fpix2, outdepth, L_CLIP_TO_ZERO, 0);
    fpixDestroy(&fpix2);
    return pixd;
}
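For reference, a minimal sketch of how this could be called from the snippet in the question (the kernel string and file names are placeholders, as before):
#include "allheaders.h"

l_int32 bias;
PIX *pixs = pixRead("file.png");
L_KERNEL *kel = kernelCreateFromString(7, 7, 3, 3, "...");
PIX *pixd = pixConvolveWithBias(pixs, kel, NULL, 1, &bias);
/* bias now holds the shift that was applied to keep the values non-negative */
pixWrite("out.png", pixd, IFF_PNG);
pixDestroy(&pixs);
pixDestroy(&pixd);
kernelDestroy(&kel);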
Here is the solution as I needed it, based on Dan's input.
/*!
 *  pixConvolveWithBias()
 *
 *      Input:  pixs (8 bpp; no colormap)
 *              kel1
 *              kel2 (can be null; use if separable)
 *              outdepth (of pixd: 8, 16 or 32)
 *              normflag (1 to normalize kernel to unit sum; 0 otherwise)
 *              bias
 *      Return: pixd
 *
 *  Notes:
 *      (1) This does a convolution with either a single kernel or
 *          a pair of separable kernels, and automatically applies whatever
 *          bias (shift) is required so that the resulting pixel values
 *          are non-negative.
 *      (2) If there are no negative values in the kernel, a convolution
 *          is performed and bias added.
 *      (3) If there are negative values in the kernel, the pix is
 *          converted to an fpix, the convolution is done on the fpix, and
 *          a bias (shift) is applied.
 */
PIX *
pixConvolveWithBias(PIX       *pixs,
                    L_KERNEL  *kel1,
                    L_KERNEL  *kel2,
                    l_int32    outdepth,
                    l_int32    normflag,
                    l_int32    bias)
{
l_float32  min1, min2, min;
FPIX      *fpix1, *fpix2;
PIX       *pixd;

    PROCNAME("pixConvolveWithBias");

    if (!pixs || pixGetDepth(pixs) != 8)
        return (PIX *)ERROR_PTR("pixs undefined or not 8 bpp", procName, NULL);
    if (pixGetColormap(pixs))
        return (PIX *)ERROR_PTR("pixs has colormap", procName, NULL);
    if (!kel1)
        return (PIX *)ERROR_PTR("kel1 not defined", procName, NULL);

        /* Determine if negative values can be produced in convolution */
    kernelGetMinMax(kel1, &min1, NULL);
    min2 = 0.0;
    if (kel2)
        kernelGetMinMax(kel2, &min2, NULL);
    min = L_MIN(min1, min2);

    if (min >= 0.0) {
        if (!kel2)
            pixd = pixConvolve(pixs, kel1, outdepth, normflag);
        else
            pixd = pixConvolveSep(pixs, kel1, kel2, outdepth, normflag);
        pixAddConstantGray(pixd, bias);
    } else {
            /* Bias may need to be applied; convert to fpix and convolve */
        fpix1 = pixConvertToFPix(pixs, 1);
        if (!kel2)
            fpix2 = fpixConvolve(fpix1, kel1, normflag);
        else
            fpix2 = fpixConvolveSep(fpix1, kel1, kel2, normflag);
        fpixDestroy(&fpix1);
        fpixAddMultConstant(fpix2, bias, 1.0);
        pixd = fpixConvertToPix(fpix2, outdepth, L_CLIP_TO_ZERO, 0);
        fpixDestroy(&fpix2);
    }
    return pixd;
}
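Combined with the snippet from the question, a call might then look like this (the bias of 128 is just an example value):
PIX* pixd = pixConvolveWithBias( pixs, kel, NULL, 8, 1, 128 );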
I found this code in this forum, but I'm having doubts about it.
In the snippet int hi = Convert.ToInt32(Math.Floor(hue / 60)) % 6;, why is the result taken modulo 6 (% 6)?
Why is value multiplied by 255 in value = value * 255?
I'm referring to this paper (pp. 15-16), where the same algorithm is discussed, but I found these differences:
http://www.poynton.com/PDFs/coloureq.pdf
public static Color ColorFromHSV(double hue, double saturation, double value)
{
    int hi = Convert.ToInt32(Math.Floor(hue / 60)) % 6;
    double f = hue / 60 - Math.Floor(hue / 60);

    value = value * 255;
    int v = Convert.ToInt32(value);
    int p = Convert.ToInt32(value * (1 - saturation));
    int q = Convert.ToInt32(value * (1 - f * saturation));
    int t = Convert.ToInt32(value * (1 - (1 - f) * saturation));

    if (hi == 0)
        return Color.FromArgb(255, v, t, p);
    else if (hi == 1)
        return Color.FromArgb(255, q, v, p);
    else if (hi == 2)
        return Color.FromArgb(255, p, v, t);
    else if (hi == 3)
        return Color.FromArgb(255, p, q, v);
    else if (hi == 4)
        return Color.FromArgb(255, t, p, v);
    else
        return Color.FromArgb(255, v, p, q);
}

public void convertToHSV(Color color, out double hue, out double saturation, out double value)
{
    int max = Math.Max(color.R, Math.Max(color.G, color.B));
    int min = Math.Min(color.R, Math.Min(color.G, color.B));

    hue = color.GetHue();
    saturation = (max == 0) ? 0 : 1d - (1d * min / max);
    value = max / 255d;
}
Regarding
int hi = Convert.ToInt32(Math.Floor(hue / 60)) % 6;
Hue might be represented as larger than 360 or smaller than 0 if there are color transformations in other pieces of code that do not make sure to reduce it mod 360.
If you are 100% sure that all other functions return a hue within [0, 360), then the modulo 6 is not needed.
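For example, hue = 400 gives Math.Floor(400 / 60) = 6 and 6 % 6 = 0, putting it in the same sector as the equivalent angle 400 - 360 = 40, for which Math.Floor(40 / 60) = 0 directly; the fractional part f comes out the same (≈ 0.667) either way.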
In HSV, Value is typically normalized to the continuous interval [0, 1], whereas RGB components live in the discrete interval [0, 255]. Hence both
value = value * 255;
and
value = max / 255d;
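For example, an HSV Value of 1.0 becomes the RGB channel value Convert.ToInt32(1.0 * 255) = 255 on conversion, and going the other way a maximum channel of 255 maps back to 255 / 255d = 1.0.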