Infinite loop in opencv_traincascade CvCascadeClassifier::fillPassedSamples - opencv

So I have been playing around with OpenCV's newest LBP cascade trainer, and I keep running into an infinite loop. I believe the cause may be my limited negative (background) image set, but the program should not run into an infinite loop either way. I managed to identify the location of the infinite loop and modified the source code, which not only avoids the loop but also improves detection performance in the resulting cascade file. However, I would still like someone who understands the code to tell me whether this is a proper fix and why it works (or otherwise):
Sample preparation:
So I have one positive image, and used "createsamples" to generate 100 distorted / rotated positive samples:
opencv_createsamples -img positive1.png -num 100 -bg neg.txt -vec samples.vec -maxidev 50 -w 100 -h 62 -maxxangle 0 -maxyangle 0.6 -maxzangle 0.4 -show 1
I have only 5 negative samples in the "negative" directory. Then my training command:
opencv_traincascade -data cascade_result -vec samples.vec -bg neg.txt -numStages 10 -numPos 100 -numNeg 200 -featureType LBP -w 100 -h 62 -bt DAB -minHitRate 0.99 -maxFalseAlarmRate 0.2 -weightTrimRate 0.95 -maxDepth 1
Note that I set -numNeg 200 even though I only have 5 negative images in "neg.txt". Later I found out numNeg does not need to match the number of negative images, as the program "crops" pieces out of your negative images repeatedly to use against positive samples during training.
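To illustrate the idea (a simplified sketch only, not the actual imagestorage.cpp logic; iter_neg_windows is a hypothetical helper), each negative file can be swept with a window of the training size, so even 5 files can yield hundreds of crops:

import cv2

def iter_neg_windows(path, win_w=100, win_h=62, step=50):
    # Slide a training-size window across one negative image;
    # each position yields one candidate negative sample.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    for y in range(0, img.shape[0] - win_h + 1, step):
        for x in range(0, img.shape[1] - win_w + 1, step):
            yield img[y:y + win_h, x:x + win_w]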
Stage 4 is where I run into the infinite loop, in the function below (see "// !!!!!"):
int CvCascadeClassifier::fillPassedSamples( int first, int count, bool isPositive, int64& consumed )
{
    int getcount = 0;
    Mat img(cascadeParams.winSize, CV_8UC1);
    cout << "isPos: " << isPositive << "; first: " << first << "; count: " << count << endl;
    for( int i = first; i < first + count; i++ )
    {
        int inner_count = 0;
        // !!!!! Here is the start of infinite loop
        for( ; ; )
        {
            // !!!!! This is my code to fix the infinite loop:
            /*
            inner_count++;
            if (inner_count > numNeg * 200) // there should be less than 200 crops of negative images per negative image
            {
                cout << "force exit the loop: inner count: " << inner_count << "; consumed: " << consumed << endl;
                break;
            }
            */
            bool isGetImg = isPositive ? imgReader.getPos( img ) :
                                         imgReader.getNeg( img );
            if( !isGetImg )
                return getcount;
            consumed++;
            featureEvaluator->setImage( img, isPositive ? 1 : 0, i );
            if( predict( i ) == 1 )
            {
                getcount++;
                break;
            }
        }
    }
    return getcount;
}
I think the problem is that imgReader.getNeg(img) keeps cropping from the negative set until the "predict(i) == 1" condition is satisfied to exit the loop. I do not understand exactly what "predict(i)" does, but I would guess that if the negative set is small and limited, it runs out of "variety" of images for "predict(i)" to return 1 on, so the loop never finishes. One solution is to increase the negative set, which is what I am going to try next. The other, quicker solution is the code I added at // !!!!! to limit the number of tries to 200 per negative image on average, then force-exit the loop if no good candidate is found.
With this fix, my training session went on to stage 5, then stopped there. I put the cascade XML in my app, and it performed reasonably well, better than when I stopped at stage 4 to avoid the infinite loop.
I hope someone who understands the code more would enlighten us further...
thank you

joe
You may be hitting the same problem I did.
The problem is caused when opencv_traincascade.exe doesn't get the image width and height correctly, or when the original image width and height are smaller than the training window size.
You can add the two lines marked by arrows in the following code to opencv/apps/traincascade/imagestorage.cpp to solve the problem.
bool CvCascadeImageReader::NegReader::nextImg()
{
    Point _offset = Point(0,0);
    size_t count = imgFilenames.size();
    for( size_t i = 0; i < count; i++ )
    {
        src = imread( imgFilenames[last++], 0 );
        if( src.rows < winSize.height || src.cols < winSize.width )  // <-----------
            continue;                                                // <-----------
        if( src.empty() )
            continue;
        ...
Hope this solution will help you.

Related

How to export/convert line projection to excel table and order the Y coordinate

I wrote code that gets the line projection (intensity profile) of an image, and I would like to convert/export this line projection to an Excel table and then order all the Y coordinates. For example, besides the maximum and minimum of all the Y values, I would like to know the five largest and the five smallest values.
Is there any code that can achieve this? Thanks,
image line_projection
Realimage imgexmp
imgexmp := GetFrontImage()
number samples = 256, xscale, yscale, xsize, ysize
GetSize( imgexmp, xsize, ysize )
line_projection := CreateFloatImage( "line projection", Xsize, 1 )
line_projection = 0
line_projection[icol,0] += imgexmp
line_projection /= samples
ShowImage( line_projection )
Finding a 'sorted' list of values
If you need to sort through large lists of values (i.e. large images), the following might not be very efficient. However, if your aim is to get the "n highest" values for a relatively small number n, then the following code is just fine:
number nFind = 10
image test := GetFrontImage().ImageClone()
Result( "\n\n" + nFind + " highest values:\n" )
number x, y, v
For( number i = 0; i < nFind; i++ )
{
    v = max( test, x, y )
    Result( "\t" + v + " at " + x + "\n" )
    test[x, y] = -Infinity()
}
This works on a copy and subsequently "removes" the maximum value by overwriting that pixel. The max command is fast, even for large images, but the for-loop iteration and the setting of individual pixels are slow. Hence this script is too slow for a complete sort of big data, but it can quickly get you the n highest values.
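For comparison, outside DigitalMicrograph the same "n highest values" task (and the export-to-Excel step) is straightforward in Python/NumPy; this is a sketch that assumes the profile has already been saved as a plain array (the file names are hypothetical):

import numpy as np

def n_highest(profile, n=5):
    # Return (index, value) pairs for the n highest values,
    # using a partial sort instead of sorting the whole array.
    idx = np.argpartition(profile, -n)[-n:]        # O(len) partial sort
    idx = idx[np.argsort(profile[idx])[::-1]]      # order only those n
    return [(int(i), float(profile[i])) for i in idx]

profile = np.loadtxt("line_projection.txt")        # hypothetical export
print(n_highest(profile, 5))
np.savetxt("line_projection.csv", profile, delimiter=",")  # open in Excel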
This is a non-coding answer:
If you have a LinePlot display in DigitalMicrograph, you can simply copy-paste it into Excel to get the numbers.
That is, with the LinePlot image frontmost, press CTRL + C to copy
(make sure there are no ROIs on it).
Switch to Excel and press CTRL + V. Done.

logistic regression with gradient descent error

I am trying to implement logistic regression with gradient descent.
I compute my cost function j_theta at each iteration, and fortunately j_theta decreases when plotted against the number of iterations.
The data set I use is given below:
x=
1 20 30
1 40 60
1 70 30
1 50 50
1 50 40
1 60 40
1 30 40
1 40 50
1 10 20
1 30 40
1 70 70
y= 0
1
1
1
0
1
0
0
0
0
1
The code that I managed to write for logistic regression using gradient descent is:
%1. Load the data on your desktop into Octave's memory.
x = load('stud_marks.dat');
%y = load('ex4y.dat');
y = x(:,3);
x = x(:,1:2);
%2. Add a column x0 of ones to the matrix.
[m,n] = size(x);
x = [ones(m,1), x];
X = x;
% Now we feature-scale x1 and x2. We skip the first column x0 because it should stay 1.
mn = mean(x);
sd = std(x);
x(:,2) = (x(:,2) - mn(2)) ./ sd(2);
x(:,3) = (x(:,3) - mn(3)) ./ sd(3);
% We will not use the vectorized technique because it is hard to debug; we use explicit for loops instead.
max_iter = 50;
theta = zeros(size(x(1,:)))';
j_theta = zeros(max_iter, 1);
for num_iter = 1:max_iter
    % Compute the cost function.
    j_cost_each = 0;
    alpha = 1;
    theta                 % debug: echo current theta
    for i = 1:m
        z = 0;
        for j = 1:n+1
            % theta(j)
            z = z + (theta(j) * x(i,j));
            z             % debug: echo accumulated z
        end
        h = 1.0 ./ (1.0 + exp(-z));
        j_cost_each = j_cost_each + ((-y(i) * log(h)) - ((1 - y(i)) * log(1 - h)));
        % j_cost_each
    end
    j_theta(num_iter) = (1/m) * j_cost_each;
    % Gradient descent update.
    for j = 1:n+1
        grad(j) = 0;
        for i = 1:m
            z = (x(i,:) * theta);
            z             % debug: echo z
            h = 1.0 ./ (1.0 + exp(-z));
            h             % debug: echo h
            grad(j) += (h - y(i)) * x(i,j);
        end
        grad(j) = grad(j) / m;
        grad(j)           % debug: echo gradient component
        theta(j) = theta(j) - alpha * grad(j);
    end
end
figure
plot(0:max_iter-1, j_theta, 'b', 'LineWidth', 2)   % plot the cost over all max_iter iterations
hold off
figure
%3. Plot the input data to see how the two classes are distributed.
pos = find(y == 1);   % indices in y of all examples with class 1
neg = find(y == 0);   % indices in y of all examples with class 0
% Now we plot column x1 vs x2 for y=1 and y=0.
plot(x(pos,2), x(pos,3), '+');
hold on
plot(x(neg,2), x(neg,3), 'o');
xlabel('x1 marks in subject 1')
ylabel('x2 marks in subject 2')
legend('Passed', 'Failed')
plot_x = [min(x(:,2))-2, max(x(:,2))+2];   % this min and max decide the extent of the decision line
% Calculate the decision boundary line.
plot_y = (-1./theta(3)) .* (theta(2).*plot_x + theta(1));
plot(plot_x, plot_y)
hold off
% The only difference: in the earlier plot I used X, whereas here I use x, whose features are scaled.
If you plot x1 vs x2, the graph looks like the image below.
After I run my code I draw the decision boundary. The shape of the decision line seems to be okay, but it is a bit displaced. The graph of x1 vs x2 with the decision boundary is given below:
[image: x1 vs x2 scatter with the displaced decision boundary]
Please suggest where I am going wrong...
Thanks :)
The new graph:
[image: updated x1 vs x2 plot, axes rescaled by feature scaling]
If you look at the new graph, the x-axis coordinates have changed. That's because I use x (feature scaled) instead of X.
The problem lies in your cost function calculation and/or your gradient calculation; your plotting code is fine. I ran your dataset through the logistic regression algorithm I implemented, but using the vectorized technique, because in my opinion it is easier to debug.
The final values I got for theta were
theta =
[-76.4242,
0.8214,
0.7948]
I also used alpha = 0.3
I plotted the decision boundary and it looks fine. I would recommend using the vectorized form, as it is easier to implement and debug.
I also think your implementation of gradient descent is not quite correct: 50 iterations is just not enough, and the cost at the last iteration is not good enough. Maybe you should run it for more iterations with a stopping condition.
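For reference, a minimal vectorized sketch (in Python/NumPy rather than Octave; not my actual code, and assuming X already carries the leading column of ones) of the same batch gradient descent:

import numpy as np

def train_logistic(X, y, alpha=0.3, max_iter=5000):
    # X is (m, n) with a leading column of ones; y is (m,).
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(max_iter):
        h = 1.0 / (1.0 + np.exp(-X @ theta))  # sigmoid over all m examples at once
        theta -= alpha * (X.T @ (h - y)) / m  # vectorized gradient step
    return theta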
Also check this lecture for optimization techniques.
https://class.coursera.org/ml-006/lecture/37

How to convert Wifi signal strength from Quality (percent) to RSSI (dBm)?

How should I convert WiFi signal strength from a quality percentage, usually 0% to 100%, into an RSSI value, usually a negative dBm number (e.g. -96 dBm)?
Wifi Signal Strength Percentage to RSSI dBm
Microsoft defines Wifi signal quality in their WLAN_ASSOCIATION_ATTRIBUTES structure as follows:
wlanSignalQuality:
A percentage value that represents the signal quality of the network.
WLAN_SIGNAL_QUALITY is of type ULONG. This member contains a value
between 0 and 100. A value of 0 implies an actual RSSI signal strength
of -100 dbm. A value of 100 implies an actual RSSI signal strength of
-50 dbm. You can calculate the RSSI signal strength value for
wlanSignalQuality values between 1 and 99 using linear interpolation.
RSSI (or "Radio (Received) Signal Strength Indicator") are in units of 'dB' (decibel) or the similar 'dBm' (dB per milliwatt) (See dB vs. dBm) in which the smaller magnitude negative numbers have the highest signal strength, or quality.
Therefore, the conversion between quality (percentage) and dBm is as follows:
quality = 2 * (dBm + 100) where dBm: [-100 to -50]
dBm = (quality / 2) - 100 where quality: [0 to 100]
Pseudo Code (with example clamping):
// dBm to Quality:
if(dBm <= -100)
    quality = 0;
else if(dBm >= -50)
    quality = 100;
else
    quality = 2 * (dBm + 100);

// Quality to dBm:
if(quality <= 0)
    dBm = -100;
else if(quality >= 100)
    dBm = -50;
else
    dBm = (quality / 2) - 100;
Note:
Check the definition of Quality that you are using for your calculations carefully. Also check the range of dB (or dBm). The limits may vary.
Examples:
Medium quality: 50% -> -75dBm = (50 / 2) - 100
Low quality: -96dBm -> 8% = 2 * (-96 + 100)
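The same two rules, with the clamping from the pseudocode, fit in a few lines of Python (a sketch based on the Microsoft definition above):

def dbm_to_quality(dbm):
    # RSSI dBm [-100..-50] -> quality [0..100]
    return min(max(2 * (dbm + 100), 0), 100)

def quality_to_dbm(quality):
    # Quality [0..100] -> RSSI dBm [-100..-50]
    return min(max(quality, 0), 100) / 2 - 100

assert dbm_to_quality(-96) == 8      # the low-quality example above
assert quality_to_dbm(50) == -75     # the medium-quality example above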
In JS I prefer doing something like:
Math.min(Math.max(2 * (x + 100), 0), 100)
In my opinion it's a more elegant way to write it than using ifs.
From experience:
Better than -50 dB (-40, -30 and -20) = 100% signal strength
From -51 to -55 dB = 90%
From -56 to -62 dB = 80%
From -63 to -65 dB = 75%
The below is not good enough for Apple devices:
From -66 to -68 dB = 70%
From -69 to -74 dB = 60%
From -75 to -79 dB = 50%
From -80 to -83 dB = 30%
Windows laptops can work fine at -80 dB, though with slower speeds
I'm glad I found this post, because I was looking for a way to convert the dBm to a percentage. Using David's post, I wrote a quick script in Python to calculate the quality percentage.
#!/usr/bin/env python3
import os
import platform
import sys

system = platform.system()
if system == 'Linux':
    cmd = "iwconfig wlan0 | grep Signal | /usr/bin/awk '{print $4}' | /usr/bin/cut -d'=' -f2"
elif system == 'Darwin':
    cmd = "/System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport -I | grep CtlRSSI | awk '{ print $NF; }'"
else:
    sys.exit("Unsupported os: {}".format(system))

dbm = os.popen(cmd).read()
if dbm:
    dbm_num = int(dbm)
    quality = 2 * (dbm_num + 100)
    print("{0} dbm_num = {1}%".format(dbm_num, quality))
else:
    print("Wifi router connection signal strength not found")
In order to get the highest wifi quality from where my computer is located, I moved/rotated my antenna until I received the highest quality. To see real time quality, I ran the above script using:
watch -n0.1 "python getwifiquality.py"
I know this may be late, but it may help someone in the future.
I took dBm magnitudes of 30 to 90 for RSSI and correlated them to 100 to 0%.
I used the basic linear equation to get the answer:
y = mx + b
We know our x values (|dBm|): 30 and 90.
We know our y values (%): 100 and 0.
We just need to find the slope to make it linear:
m = (100 - 0) / (30 - 90) = 100 / (-60) = -5/3
b = y - m*x = 0 + (5/3) * 90 = 150
Final equation to put in code when you know the RSSI value:
% = 150 - (5/3) * RSSI
Note: I took the RSSI value, which is normally negative, and used its absolute value to get positive numbers:
quality = abs(RSSI)
% = 150 - (5/3) * quality
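In code, that line (a sketch with clamping added so values outside the 30-90 dBm window stay within 0-100%) could look like:

def rssi_to_percent(rssi):
    quality = 150 - (5 / 3) * abs(rssi)   # the linear fit derived above
    return min(max(quality, 0), 100)      # clamp outside |RSSI| = 30..90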
From RSSI vs RSS:
RSSI - Received Signal Strength Indicator
RSS - Received Signal Strength
RSSI is an indicator and RSS is the real value. What does "indicator" mean? It means the value may be relative; RSSI is always a positive value and has no unit.
You could say RSSI is meant for the common man to understand. RF values are usually given in dBm, and they are negative most of the time. To make them easier for people to understand, these negative values are converted to positive ones through scaling.
Say, for example, the maximum signal strength is 0 dBm and the minimum is -100 dBm. We can scale it as follows: 0 dBm or more (RSS) becomes 100 RSSI (the maximum RSSI), and -100 dBm or less becomes 0 RSSI (the minimum RSSI).
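Written out, that example scaling is just a shifted clamp (a sketch; real vendor mappings differ):

def rss_dbm_to_rssi(dbm):
    # Scale RSS in dBm (-100..0) to an RSSI of 0..100.
    return min(max(dbm + 100, 0), 100)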
This is what I have done:
long rssi = WiFi.RSSI();
rssi = -rssi;
int WiFiperct;
if (rssi < 27) {
    WiFiperct = 100;
}
else if (rssi >= 27 && rssi < 33) {
    WiFiperct = 150 - (5 / 2.7) * rssi;
}
else if (rssi >= 33 && rssi < 36) {
    WiFiperct = 150 - (5 / 3.0) * rssi;   // 5/3 would be integer division (== 1)
}
else if (rssi >= 36 && rssi < 40) {
    WiFiperct = 150 - (5 / 3.3) * rssi;
}
else if (rssi >= 40 && rssi < 80) {
    WiFiperct = 150 - (5 / 3.5) * rssi;
}
else if (rssi >= 80 && rssi < 90) {
    WiFiperct = 150 - (5 / 3.4) * rssi;
}
else if (rssi >= 90 && rssi < 99) {
    WiFiperct = 150 - (5 / 3.3) * rssi;
}
else {
    WiFiperct = 0;
}
This article is a more detailed explanation of mW, dBm and RSSI:
http://madwifi-project.org/attachment/wiki/UserDocs/RSSI/Converting_Signal_Strength.pdf?format=raw
According to it, RSSI has no unit. It is a value defined in the 802.11 standard, calculated by the NIC and sent to the OS. The NIC vendor should provide a mapping table of dBm-to-RSSI values.
Sorry for the direct link, but I cannot find the original page for the file.
The pseudocode mentioned above will not work for all ranges, for example mapping -80 dBm to 0 and -40 dBm to 100.
Here is generic, simple logic to map any range to 0 to 100.
Usage example for the code below: ConvertRangeToPercentage(-80, -40, -50)
int ConvertRangeToPercentage (int a_value_map_to_zero, int a_value_map_to_100, int a_value_to_convert)
{
    int percentage = 0;
    if (a_value_map_to_zero < a_value_map_to_100)
    {
        if (a_value_to_convert <= a_value_map_to_zero)
        {
            percentage = 0;
        }
        else if (a_value_to_convert >= a_value_map_to_100)
        {
            percentage = 100;
        }
        else
        {
            percentage = (a_value_to_convert - a_value_map_to_zero) * 100 / (a_value_map_to_100 - a_value_map_to_zero);
        }
    }
    else if (a_value_map_to_zero > a_value_map_to_100)
    {
        if (a_value_to_convert >= a_value_map_to_zero)
        {
            percentage = 0;
        }
        else if (a_value_to_convert <= a_value_map_to_100)
        {
            percentage = 100;
        }
        else
        {
            percentage = (a_value_to_convert - a_value_map_to_zero) * 100 / (a_value_map_to_100 - a_value_map_to_zero);
        }
    }
    else
    {
        percentage = 0;
    }
    return percentage;
}
OK, I agree, but then why do I see:
Quality=29/100 Signal level=-78 dBm
Quality=89/100 Signal level=-55 dBm
Quality=100/100 Signal level=-21 dBm
This does not agree with the formula dBm = quality / 2 - 100. (Presumably the driver here uses its own vendor-specific mapping rather than the linear Microsoft one.)
Also, you can try inverting this Bash function, which converts dBm to a percentage:
#!/bin/bash
function dbmtoperc { # Convert dBm to percentage (based on https://www.adriangranados.com/blog/dbm-to-percent-conversion)
    dbmtoperc_d=$(echo "$1" | tr -d -)
    dbmtoperc_r=0
    if [[ "$dbmtoperc_d" =~ [0-9]+$ ]]; then
        if ((1<=$dbmtoperc_d && $dbmtoperc_d<=20)); then dbmtoperc_r=100
        elif ((21<=$dbmtoperc_d && $dbmtoperc_d<=23)); then dbmtoperc_r=99
        elif ((24<=$dbmtoperc_d && $dbmtoperc_d<=26)); then dbmtoperc_r=98
        elif ((27<=$dbmtoperc_d && $dbmtoperc_d<=28)); then dbmtoperc_r=97
        elif ((29<=$dbmtoperc_d && $dbmtoperc_d<=30)); then dbmtoperc_r=96
        elif ((31<=$dbmtoperc_d && $dbmtoperc_d<=32)); then dbmtoperc_r=95
        elif ((33==$dbmtoperc_d)); then dbmtoperc_r=94
        elif ((34<=$dbmtoperc_d && $dbmtoperc_d<=35)); then dbmtoperc_r=93
        elif ((36<=$dbmtoperc_d && $dbmtoperc_d<=38)); then dbmtoperc_r=$((92-($dbmtoperc_d-36)))
        elif ((39<=$dbmtoperc_d && $dbmtoperc_d<=51)); then dbmtoperc_r=$((90-($dbmtoperc_d-39)))
        elif ((52<=$dbmtoperc_d && $dbmtoperc_d<=55)); then dbmtoperc_r=$((76-($dbmtoperc_d-52)))
        elif ((56<=$dbmtoperc_d && $dbmtoperc_d<=58)); then dbmtoperc_r=$((71-($dbmtoperc_d-56)))
        elif ((59<=$dbmtoperc_d && $dbmtoperc_d<=60)); then dbmtoperc_r=$((67-($dbmtoperc_d-59)))
        elif ((61<=$dbmtoperc_d && $dbmtoperc_d<=62)); then dbmtoperc_r=$((64-($dbmtoperc_d-61)))
        elif ((63<=$dbmtoperc_d && $dbmtoperc_d<=64)); then dbmtoperc_r=$((61-($dbmtoperc_d-63)))
        elif ((65==$dbmtoperc_d)); then dbmtoperc_r=58
        elif ((66<=$dbmtoperc_d && $dbmtoperc_d<=67)); then dbmtoperc_r=$((56-($dbmtoperc_d-66)))
        elif ((68==$dbmtoperc_d)); then dbmtoperc_r=53
        elif ((69==$dbmtoperc_d)); then dbmtoperc_r=51
        elif ((70<=$dbmtoperc_d && $dbmtoperc_d<=85)); then dbmtoperc_r=$((50-($dbmtoperc_d-70)*2))
        elif ((86<=$dbmtoperc_d && $dbmtoperc_d<=88)); then dbmtoperc_r=$((17-($dbmtoperc_d-86)*2))
        elif ((89<=$dbmtoperc_d && $dbmtoperc_d<=91)); then dbmtoperc_r=$((10-($dbmtoperc_d-89)*2))
        elif ((92==$dbmtoperc_d)); then dbmtoperc_r=3
        elif ((93<=$dbmtoperc_d)); then dbmtoperc_r=1; fi
    fi
    echo $dbmtoperc_r
}
Usage:
echo $(dbmtoperc -48)% # returns 81%
Airodump's RXQ is really useful in real-world conditions...
"Receive Quality as measured by the percentage of packets (management and data frames) successfully received over the last 10 seconds."
"It's measured over all management and data frames. The received frames contain a sequence number which is added by the sending access point. RXQ = 100 means that all packets were received from the access point in numerical sequence and none were missing. That's the clue: this lets you read more out of this value. Let's say you get 100 percent RXQ and all 10 (or whatever the rate is) beacons per second coming in. Now all of a sudden the RXQ drops below 90, but you still capture all sent beacons. Then you know that the AP is sending frames to a client but you can't hear the client, nor the AP sending to the client (you need to get closer)."

PGMidi changing pitch sendBytes example

I have been trying for two days to send a MIDI signal. I'm using the following code:
int pitchValue = 8191; // or -8192
int msb = ?;
int lsb = ?;
UInt8 midiData[] = { 0xe0, msb, lsb };
[midi sendBytes:midiData size:sizeof(midiData)];
I don't understand how to calculate msb and lsb. I tried pitchValue << 8, but it works incorrectly: when I look at the events in a MIDI monitoring tool I see a minimum of -8192 and a maximum of +8064. I want to get -8192 and +8191.
Sorry if the question is simple.
Pitch bend data is offset to avoid any sign-bit concerns. The maximum negative deviation is sent as a value of zero, not -8192, so you have to compensate for that, something like this Python code:
def EncodePitchBend(value):
    ''' return a 2-tuple containing (msb, lsb) '''
    if (value < -8192) or (value > 8191):
        raise ValueError
    value += 8192
    return (((value >> 7) & 0x7F), (value & 0x7F))
Since MIDI data bytes are limited to 7 bits, you need to split pitchValue into two 7-bit values:
int msb = (pitchValue + 8192) >> 7 & 0x7F;
int lsb = (pitchValue + 8192) & 0x7F;
Edit: as #bgporter pointed out, pitch wheel values are offset by 8192 so that "zero" (i.e. the center position) is at 8192 (0x2000), so I edited my answer to offset pitchValue by 8192.
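To sanity-check the encoding, here is a matching decoder in the same style (a sketch, not part of PGMidi):

def DecodePitchBend(msb, lsb):
    # Inverse of EncodePitchBend: rebuild the signed 14-bit value.
    return ((msb << 7) | lsb) - 8192

assert DecodePitchBend(*EncodePitchBend(-8192)) == -8192  # wire bytes (0x00, 0x00)
assert DecodePitchBend(*EncodePitchBend(8191)) == 8191    # wire bytes (0x7F, 0x7F)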

Scaling a number between two values

If I am given a floating point number but do not know beforehand what range the number will be in, is it possible to scale that number in some meaningful way into another range? I am thinking of checking whether the number is in the range 0 <= x <= 1; if not, scaling it to that range, and then scaling it to my final range. This previous post provides some good information, but it assumes the range of the original number is known beforehand.
You can't scale a number into a range if you don't know the range.
Maybe what you're looking for is the modulo operator. Modulo is basically the remainder of division; the operator in most languages is %.
0 % 5 == 0
1 % 5 == 1
2 % 5 == 2
3 % 5 == 3
4 % 5 == 4
5 % 5 == 0
6 % 5 == 1
7 % 5 == 2
...
Sure, it is not possible. You can define a range and ignore all values outside it. Or you can collect statistics to find the range at run time (e.g., via histogram analysis; see the sketch below).
Is this really about image processing? There are lots of related problems in the image segmentation field.
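A sketch of that run-time range estimation in Python/NumPy, using percentiles (which play the same role as a histogram) to get a robust range from observed samples:

import numpy as np

def estimate_range(samples, lo_pct=1, hi_pct=99):
    # Estimate a working range from data, ignoring extreme outliers;
    # values outside the estimated range can then be clamped.
    lo, hi = np.percentile(samples, [lo_pct, hi_pct])
    return lo, hi

lo, hi = estimate_range(np.random.normal(50, 10, 10000))
print(lo, hi)   # roughly 27 and 73 for this distribution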
You want to scale a single random floating point number to be between 0 and 1, but you don't know the range of the number?
What should 99.001 be scaled to? If the range of the random number was [99, 100], the scaled number should be pretty close to 0. If the range was [0, 100], the scaled number should be pretty close to 1.
In the real world, you always have some sort of information about the range (either the range itself, or how wide it is). Without further info, the answer is "No, it can't be done."
I think the best you can do is something like this:
double scale(double x) {
    if (x < -1) return -1 / x - 2;   // maps (-inf, -1) onto (-2, -1)
    if (x > 1) return 2 - 1 / x;     // maps (1, inf) onto (1, 2)
    return x;                        // [-1, 1] passes through unchanged
}
This function is monotonic and has a range of -2 to 2, but it's not strictly a scaling.
I am assuming that you have the result of some 2-dimensional measurements and want to display them in color or grayscale. For that, I would first find the maximum and minimum and then scale between those two values.
static double[][] scale(double[][] in, double outMin, double outMax) {
    // Find the input range.
    double inMin = Double.POSITIVE_INFINITY;
    double inMax = Double.NEGATIVE_INFINITY;
    for (double[] inRow : in) {
        for (double d : inRow) {
            if (d < inMin)
                inMin = d;
            if (d > inMax)
                inMax = d;
        }
    }
    double inRange = inMax - inMin;
    double outRange = outMax - outMin;
    // Map every value linearly from [inMin, inMax] to [outMin, outMax].
    double[][] out = new double[in.length][in[0].length];
    for (int i = 0; i < in.length; i++) {
        for (int j = 0; j < in[i].length; j++) {
            double normalized = (in[i][j] - inMin) / inRange; // 0 .. 1
            out[i][j] = outMin + normalized * outRange;
        }
    }
    return out;
}
This code is untested and just shows the general idea. It further assumes that all your input data is in a "reasonable" range, away from infinity and NaN.
