How do I determine which value occurs the most after I filled the array with 100 random values which are between 1 and 11?
Here is some sample code:
procedure TForm1.Button1Click(Sender: TObject);

  function Calculate: Integer;
  var
    Numbers: array [1..100] of Byte;
    Counts: array [1..11] of Byte;
    I: Byte;
  begin
    // Fill the array with random numbers
    for I := Low(Numbers) to High(Numbers) do
      Numbers[I] := Random(11) + 1;
    // Count the occurrences
    ZeroMemory(@Counts, SizeOf(Counts));
    for I := Low(Numbers) to High(Numbers) do
      Inc(Counts[Numbers[I]]);
    // Identify the maximum
    Result := Low(Counts);
    for I := Low(Counts) + 1 to High(Counts) do
      if Counts[I] > Counts[Result] then
        Result := I;
  end;

begin
  ShowMessage(Calculate.ToString);
end;
It is a simple question [...] but I can't seem to find any straight answers online.
You shouldn't be searching for solutions online; instead, you should start thinking about how to design an algorithm able to solve the problem. For this, you may need pen and paper.
First, we need some data to work with:
const
  ListLength = 100;
  MinValue = 1;
  MaxValue = 11;

function MakeRandomList: TArray<Integer>;
begin
  SetLength(Result, ListLength);
  for var i := 0 to High(Result) do
    Result[i] := MinValue + Random(MaxValue - MinValue + 1);
end;
The MakeRandomList function creates a dynamic array of integers. The array contains ListLength = 100 integers ranging from MinValue = 1 to MaxValue = 11, as desired.
Now, given such a list of integers,
var L := MakeRandomList;
how do we find the most frequent value?
Well, if we were to solve this problem without a computer, using only pen and paper, we would probably count the number of times each distinct value (1, 2, ..., 11) occurs in the list, no?
Then we would only need to find the value with the greatest frequency.
For instance, given the data
2, 5, 1, 10, 1, 5, 2, 7, 8, 5
we would count to find the frequencies
X Freq
2 2
5 3
1 2
10 1
7 1
8 1
Then we read the table from the top line to the bottom line to find the row with the greatest frequency, constantly keeping track of the current winner.
Now that we know how to solve the problem, it is trivial to write a piece of code that performs this algorithm:
procedure FindMostFrequentValue(const AList: TArray<Integer>);
type
  TValueAndFreq = record
    Value: Integer;
    Freq: Integer;
  end;
var
  Frequencies: TArray<TValueAndFreq>;
begin
  if Length(AList) = 0 then
    raise Exception.Create('List is empty.');
  SetLength(Frequencies, MaxValue - MinValue + 1);
  // Step 0: Label the frequency list items
  for var i := 0 to High(Frequencies) do
    Frequencies[i].Value := i + MinValue;
  // Step 1: Obtain the frequencies
  for var i := 0 to High(AList) do
  begin
    if not InRange(AList[i], MinValue, MaxValue) then // InRange is from System.Math
      raise Exception.CreateFmt('Value out of range: %d', [AList[i]]);
    Inc(Frequencies[AList[i] - MinValue].Freq);
  end;
  // Step 2: Find the winner
  var Winner: TValueAndFreq;
  Winner.Value := 0;
  Winner.Freq := 0;
  for var i := 0 to High(Frequencies) do
    if Frequencies[i].Freq > Winner.Freq then
      Winner := Frequencies[i];
  ShowMessageFmt('The most frequent value is %d with a count of %d.',
    [Winner.Value, Winner.Freq]);
end;
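For completeness, here is a minimal sketch of how the two routines above could be wired together (the surrounding program block is assumed, not part of the answer's code):

```pascal
// Hypothetical driver for the two routines above:
begin
  Randomize;                              // seed the RNG once per run
  FindMostFrequentValue(MakeRandomList);  // shows the winner in a dialog
end.
```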
Delphi has a TDictionary class, which you can use to implement a frequency map, eg:
uses
  ..., System.Generics.Collections;

function MostFrequent(Arr: array of Integer): Integer;
var
  Frequencies: TDictionary<Integer, Integer>;
  I, Freq, MaxFreq: Integer;
  Elem: TPair<Integer, Integer>;
begin
  Frequencies := TDictionary<Integer, Integer>.Create;
  try
    // Fill the dictionary with counts
    for I := Low(Arr) to High(Arr) do
    begin
      if not Frequencies.TryGetValue(Arr[I], Freq) then
        Freq := 0;
      Frequencies.AddOrSetValue(Arr[I], Freq + 1);
    end;
    // Identify the maximum
    Result := 0;
    MaxFreq := 0;
    for Elem in Frequencies do
    begin
      if Elem.Value > MaxFreq then
      begin
        MaxFreq := Elem.Value;
        Result := Elem.Key;
      end;
    end;
  finally
    Frequencies.Free;
  end;
end;

var
  Numbers: array [1..100] of Integer;
  I: Integer;
begin
  // Fill the array with random numbers
  for I := Low(Numbers) to High(Numbers) do
    Numbers[I] := Random(11) + 1;
  // Identify the most frequent value
  ShowMessage(IntToStr(MostFrequent(Numbers)));
end;
I am also still learning, and I feel that the way I approached this problem might be a little closer to the way a beginner would have done it:
procedure TForm1.GetMostOccuring;
var
  arrNumbers: array[1..100] of Integer;
  iNumberWithMost: Integer;
  iNewAmount, iMostAmount: Integer;
  I, J: Integer;
begin
  for I := 1 to 100 do
    arrNumbers[I] := Random(10) + 1;
  iMostAmount := 0;
  for I := 1 to 10 do
  begin
    iNewAmount := 0;
    for J := 1 to 100 do
      if I = arrNumbers[J] then
        Inc(iNewAmount);
    if iNewAmount > iMostAmount then
    begin
      iMostAmount := iNewAmount;
      iNumberWithMost := I;
    end;
  end;
  ShowMessage(IntToStr(iNumberWithMost));
end;
I hope this is not completely useless.
It is just a simple answer to a simple question.
I was trying to implement the following recursive formula in my code,
but to my surprise, after implementing it in Delphi, I get an error due to division by zero. I am 98% sure that my knot vector is correctly calculated, which in a way means there shouldn't be any divisions by zero. I am only 70% sure that the recursive formula is correctly implemented, so I am posting my code here:
program project1;

uses
  SysUtils;

type
  TRealPoint = record
    x: single;
    y: single;
  end;

type
  TSample = class(TObject)
  public
    KnotVector: array of single;
    FitPoints: array of TRealPoint;
    Degree: integer;
    constructor Create; overload;
    function Coefficient(i, p: integer; Knot: single): single;
    procedure GetKnots;
    destructor Destroy; overload;
  end;

constructor TSample.Create;
begin
  inherited;
end;

function TSample.Coefficient(i, p: integer; Knot: single): single;
var
  s1, s2: single;
begin
  if (p = 0) then
  begin
    if (KnotVector[i] <= Knot) and (Knot < KnotVector[i+1]) then
      Result := 1.0
    else
      Result := 0.0;
  end
  else
  begin
    s1 := (Knot - KnotVector[i])*Coefficient(i, p-1, Knot)/(KnotVector[i+p] - KnotVector[i]); // THIS LINE ERRORS due to division by zero ???
    s2 := (KnotVector[i+p+1] - Knot)*Coefficient(i+1, p-1, Knot)/(KnotVector[i+p+1] - KnotVector[i+1]);
    Result := s1 + s2;
  end;
end;

procedure TSample.GetKnots();
var
  KnotValue: single;
  i, MaxKnot: integer;
begin
  // KNOTS
  KnotValue := 0.0;
  SetLength(KnotVector, Length(FitPoints) + 1 + Degree);
  MaxKnot := Length(KnotVector) - (2*Degree + 1);
  for i := Low(KnotVector) to High(KnotVector) do
  begin
    if i <= Degree then
      KnotVector[i] := KnotValue / MaxKnot
    else if i > Length(FitPoints) then
      KnotVector[i] := KnotValue / MaxKnot
    else
    begin
      KnotValue := KnotValue + 1.0;
      KnotVector[i] := KnotValue / MaxKnot;
    end;
  end;
end;

destructor TSample.Destroy;
begin
  inherited;
end;

var
  i, j: integer;
  Test: TSample;
  N: array of array of single;

begin
  Test := TSample.Create;
  // define degree
  Test.Degree := 3;
  // random fit points
  j := 15;
  SetLength(Test.FitPoints, j + 1 + Test.Degree);
  for i := Low(Test.FitPoints) to High(Test.FitPoints) do
  begin
    Test.FitPoints[i].x := Random()*2000;
    Test.FitPoints[i].y := Random()*2000;
  end;
  // get knot vector
  Test.GetKnots;
  // get coefficients
  SetLength(N, j+1, j+1);
  for j := Low(N) to High(N) do
  begin
    for i := Low(N[j]) to High(N[j]) do
    begin
      N[j, i] := Test.Coefficient(i, 3, Test.KnotVector[j]);
      write(FloatToStrF(N[j, i], ffFixed, 2, 2) + ', ');
    end;
    writeln();
  end;
  readln();
  Test.Free;
end.
Basically, I'm not sure how to continue. I need the values of the matrix N (see this link) of basis coefficients, but somehow using the formula from this link leads me to division by zero.
So... is there a totally different way to calculate those coefficients, or what is the problem here?
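One conventional way out, not shown in the code above, is the standard Cox-de Boor convention of treating any 0/0 term as 0: with repeated (clamped) knots, some denominators legitimately become zero, so each quotient must be guarded. A sketch of a guarded variant of the Coefficient method above (the name CoefficientSafe and the guards are my addition, not part of the original formula):

```pascal
// Sketch: Coefficient with the usual 0/0 := 0 convention for repeated knots.
function TSample.CoefficientSafe(i, p: integer; Knot: single): single;
var
  d1, d2, s1, s2: single;
begin
  if p = 0 then
  begin
    if (KnotVector[i] <= Knot) and (Knot < KnotVector[i+1]) then
      Result := 1.0
    else
      Result := 0.0;
  end
  else
  begin
    d1 := KnotVector[i+p] - KnotVector[i];
    d2 := KnotVector[i+p+1] - KnotVector[i+1];
    if d1 > 0 then
      s1 := (Knot - KnotVector[i]) * CoefficientSafe(i, p-1, Knot) / d1
    else
      s1 := 0.0; // repeated knot: define 0/0 as 0
    if d2 > 0 then
      s2 := (KnotVector[i+p+1] - Knot) * CoefficientSafe(i+1, p-1, Knot) / d2
    else
      s2 := 0.0; // repeated knot: define 0/0 as 0
    Result := s1 + s2;
  end;
end;
```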
UPDATE
Instead of using my own idea, I tried to implement the algorithm from here, as suggested by Dsm in the comments. As a result there is no more division by zero, but the result is still totally unexpected.
For n + 1 = 10 random fit points with spline degree 3, the basis matrix N (see link) is singular, as seen in the attached image.
Instead I would expect it to be a band matrix. Anyway, here is my updated code:
program project1;

uses
  SysUtils;

type
  TRealPoint = record
    x: single;
    y: single;
  end;

type
  TMatrix = array of array of double;

type
  TSample = class(TObject)
  public
    KnotVector: array of double;
    FitPoints: array of TRealPoint;
    SplineDegree: integer;
    Temp: array of double;
    A: TMatrix;
    procedure GetKnots;
    function GetBasis(Parameter: double): boolean;
    procedure FormBasisMatrix;
  end;

procedure TSample.GetKnots();
var
  i, j: integer;
begin
  // KNOTS
  // https://pages.mtu.edu/~shene/COURSES/cs3621/NOTES/INT-APP/PARA-knot-generation.html
  SetLength(KnotVector, Length(FitPoints) + SplineDegree + 1);
  for i := Low(KnotVector) to High(KnotVector) do
  begin
    if i <= SplineDegree then
      KnotVector[i] := 0
    else if i <= (High(KnotVector) - SplineDegree - 1) then
      KnotVector[i] := (i - SplineDegree) / (Length(FitPoints) - SplineDegree)
    else
      KnotVector[i] := 1;
  end;
end;

function TSample.GetBasis(Parameter: double): boolean;
var
  m, d, k: integer;
  FirstTerm, SecondTerm: double;
begin
  // http://pages.mtu.edu/~shene/COURSES/cs3621/NOTES/spline/B-spline/bspline-curve-coef.html
  Result := False;
  // initialize to 0
  SetLength(Temp, Length(FitPoints));
  for m := Low(Temp) to High(Temp) do
    Temp[m] := 0.0;
  // special cases
  if Abs(Parameter - KnotVector[0]) < 1e-8 then
    Temp[0] := 1
  else if Abs(Parameter - KnotVector[High(KnotVector)]) < 1e-8 then
    Temp[High(Temp)] := 1
  else
  begin
    // find knot span [u_k, u_{k+1})
    for k := Low(KnotVector) to High(KnotVector) do
      if Abs(KnotVector[k] - Parameter) < 1e-8 then
        break;
    Temp[k] := 1.0;
    for d := 1 to SplineDegree do
    begin
      Temp[k - d] := (KnotVector[k + 1] - Parameter) * Temp[k - d + 1] / (KnotVector[k + 1] - KnotVector[k - d + 1]);
      for m := k - d + 1 to k - 1 do
      begin
        FirstTerm := (Parameter - KnotVector[m]) / (KnotVector[m + d] - KnotVector[m]);
        SecondTerm := (KnotVector[m + d + 1] - Parameter) / (KnotVector[m + d + 1] - KnotVector[m + 1]);
        Temp[m] := FirstTerm * Temp[m] + SecondTerm * Temp[m + 1];
      end;
      Temp[k] := (Parameter - KnotVector[k]) * Temp[k] / (KnotVector[k + d] - KnotVector[k]);
    end;
  end;
  Result := True;
end;

procedure TSample.FormBasisMatrix;
var
  i, j: integer;
begin
  SetLength(A, Length(FitPoints), Length(FitPoints));
  for j := Low(A) to High(A) do
  begin
    for i := Low(A[j]) to High(A[j]) do // j - row, i - column
    begin
      if GetBasis(KnotVector[j + SplineDegree]) then
        A[j, i] := Temp[i];
    end;
  end;
end;

var
  i, j, iFitPoints: integer;
  Test: TSample;
  N: array of array of single;

begin
  Test := TSample.Create;
  // define degree
  Test.SplineDegree := 3;
  // random fit points
  iFitPoints := 10;
  SetLength(Test.FitPoints, iFitPoints);
  for i := Low(Test.FitPoints) to High(Test.FitPoints) do
  begin
    Test.FitPoints[i].x := Random()*200;
    Test.FitPoints[i].y := Random()*200;
  end;
  // get knot vector
  Test.GetKnots;
  // get B-Spline basis matrix
  Test.FormBasisMatrix;
  // print matrix
  for j := Low(Test.A) to High(Test.A) do
  begin
    for i := Low(Test.A) to High(Test.A) do
      write(FloatToStrF(Test.A[j, i], ffFixed, 2, 2) + ', ');
    writeln();
  end;
  readln();
  Test.Free;
end.
This does not appear to be the complete answer, but it may help you on your way; the result is closer to what you expect, though, as I say, not completely there.
First of all, the knots do not look right to me. The knots appear to form a 'ramp' function (clamped line), and though I can't work out whether 'm' has any specific value, I would expect the function to be continuous, which yours is not. Making it continuous gives better results, e.g.
procedure TSample.GetKnots();
var
  i, j: integer;
  iL: integer;
begin
  // KNOTS
  // https://pages.mtu.edu/~shene/COURSES/cs3621/NOTES/INT-APP/PARA-knot-generation.html
  iL := Length(FitPoints);
  SetLength(KnotVector, iL + SplineDegree + 1);
  // set outer knot values and sum used to generate first internal value
  for i := 0 to SplineDegree - 1 do
  begin
    KnotVector[i] := 0;
    KnotVector[High(KnotVector) - i] := 1;
  end;
  // and internal ones
  for i := 0 to High(KnotVector) - 2*SplineDegree + 1 do
  begin
    KnotVector[SplineDegree + i - 1] := i / (iL - 1);
  end;
end;
I introduced iL = Length(FitPoints) for convenience; it is not important.
The second issue I spotted is more of a programming one. In the GetBasis routine, you evaluate k by breaking out of a for loop. The problem with that is that k is not guaranteed to persist outside the loop, so your later use of it is not guaranteed to succeed (although it may).
Finally, in the same place, your range determination is completely wrong in my opinion. You should be looking for the parameter to lie in a half-open knot span, but instead you are looking for it to lie close to an endpoint of that span.
Putting these two together
for k := Low(KnotVector) to High(KnotVector) do if Abs(KnotVector[k] - Parameter) < 1e-8 then break;
should be replaced by
k1 := 0;
for k1 := High(KnotVector) downto Low(KnotVector) do
begin
  if Parameter >= KnotVector[k1] then
  begin
    k := k1;
    break;
  end;
end;
where k1 is an integer.
I can't help feeling that there is an off-by-one error somewhere, but I can't spot it.
Anyway, I hope that this helps you get a bit further.
To build the recursive pyramid for the coefficient calculation over the intervals, you have to start the top level of the recursion (the inner loop of the calculations) from the first real (non-duplicate) knot index:
For i := Test.Degree...
Also check the last loop index.
P.S. You can remove the constructor and destructor from the class declaration and implementation if they contain nothing but inherited.
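Applied to the question's class, that P.S. would shrink the declaration to something like this sketch (TObject's own Create and Destroy are then used directly):

```pascal
type
  TSample = class(TObject)
  public
    KnotVector: array of single;
    FitPoints: array of TRealPoint;
    Degree: integer;
    function Coefficient(i, p: integer; Knot: single): single;
    procedure GetKnots;
  end;
```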
I am trying to get my first neural network working, though no matter what I do, the network never seems to get to the correct answer.
Here is the output after the network reached an MSE of 0.0001
0 XOR 0 = 0.0118003716248665
1 XOR 1 = 0.994320073237859
1 XOR 0 = 0.818618888320916
0 XOR 1 = 0.985995457430471
The problem: these answers are incorrect.
I create a network with 2 inputs, 2 hidden neurons, and 1 output; the XOR problem has been solved with this same configuration, so that possibility is ruled out (I guess).
As a side note, I converted this code from a C# example found on another site. The C# code executes and works perfectly, so this is most likely a logic error or miscalculation somewhere :/
Now, unfortunately, I have been totally unable to isolate the piece of code leading up to the error, so I am going to have to post the entire code of the network here (sorry).
Edit: The UpdateWeights() function is the backpropagation; just thought I'd put this here in case someone didn't catch it. The rest of the names should be understandable.
unit NeuralNetwork_u;

interface

uses
  Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms,
  Dialogs, StdCtrls, ComCtrls, Math;

type
  TDoubleArray = array of Double;
  TDouble2DArray = array of TDoubleArray;

type
  TNeuralNetwork = class(TObject)
  private
    numInput, numHidden, numOutput: Integer;
    inputs, hBiases, hSums, hOutputs, oBiases, oSums, Outputs, oGrads, hGrads, hPrevBiasesDelta, oPrevBiasesDelta: TDoubleArray;
    ihWeights, hoWeights, ihPrevWeightsDelta, hoPrevWeightsDelta: TDouble2DArray;
  public
    constructor Create(NumInputs, NumHiddens, NumOutputs: Integer);
    procedure SetWeights(weights: TDoubleArray);
    function GetWeights: TDoubleArray;
    function GetOutputs: TDoubleArray;
    function ComputeOutputs(xValues: TDoubleArray): TDoubleArray;
    function SigmoidFunction(X: Double): Double;
    function HyperTanFunction(X: Double): Double;
    procedure UpdateWeights(tValues: TDoubleArray; learn, mom: Double);
    function Train(TrainData: TDouble2DArray; MaxEpochs: Integer; LearningRate, Momentum, DesiredError: Double): Double;
    function WeightCount: Integer;
    procedure Shuffle(var Seq: array of Integer);
    function MeanSquaredError(TrainData: TDouble2DArray): Double;
  end;

type
  THelper = class(TObject)
  public
    function MakeMatrix(Rows, Cols: Integer): TDouble2DArray;
    function Error(tValues, yValues: array of Double): Double;
  end;

implementation

uses
  NetworkInterface_u;

constructor TNeuralNetwork.Create(NumInputs, NumHiddens, NumOutputs: Integer);
var
  Helper: THelper;
begin
  Helper := THelper.Create;
  numInput := NumInputs;
  numHidden := NumHiddens;
  numOutput := NumOutputs;
  SetLength(inputs, numInput);
  ihWeights := Helper.MakeMatrix(numInput, numHidden);
  SetLength(hBiases, numHidden);
  SetLength(hSums, numHidden);
  SetLength(hOutputs, numHidden);
  hoWeights := Helper.MakeMatrix(numHidden, numOutput);
  SetLength(oBiases, numOutput);
  SetLength(oSums, numOutput);
  SetLength(Outputs, numOutput);
  SetLength(oGrads, numOutput);
  SetLength(hGrads, numHidden);
  ihPrevWeightsDelta := Helper.MakeMatrix(numInput, numHidden);
  SetLength(hPrevBiasesDelta, numHidden);
  hoPrevWeightsDelta := Helper.MakeMatrix(numHidden, numOutput);
  SetLength(oPrevBiasesDelta, numOutput);
end;
procedure TNeuralNetwork.SetWeights(weights: TDoubleArray);
var
  numWeights: Integer;
  i, k, j: Integer;
begin
  numWeights := (numInput * numHidden) + (numHidden * numOutput) + numHidden + numOutput;
  if High(weights) <> numWeights then
    raise Exception.Create('The Weights Array Length Does Not Match The Total Number Of Weights And Biases - ' + IntToStr(numWeights));
  k := 0;
  for i := 0 to numInput-1 do
    for j := 0 to numHidden-1 do
    begin
      ihWeights[i][j] := weights[k];
      Inc(k);
    end;
  for i := 0 to numHidden-1 do
  begin
    hBiases[i] := weights[k];
    Inc(k);
  end;
  for i := 0 to numHidden-1 do
    for j := 0 to numOutput-1 do
    begin
      hoWeights[i][j] := weights[k];
      Inc(k);
    end;
  for i := 0 to numOutput-1 do
  begin
    oBiases[i] := weights[k];
    Inc(k);
  end;
end;

function TNeuralNetwork.GetWeights: TDoubleArray;
var
  numWeights: Integer;
  k, i, j: Integer;
begin
  numWeights := (numInput * numHidden) + (numHidden * numOutput) + numHidden + numOutput;
  SetLength(Result, numWeights);
  k := 0;
  for i := 0 to Length(ihWeights)-1 do
    for j := 0 to Length(ihWeights[0])-1 do
    begin
      Result[k] := ihWeights[i][j];
      Inc(k);
    end;
  for i := 0 to Length(hBiases)-1 do
  begin
    Result[k] := hBiases[i];
    Inc(k);
  end;
  for i := 0 to Length(hoWeights)-1 do
    for j := 0 to Length(hoWeights[0])-1 do
    begin
      Result[k] := hoWeights[i][j];
      Inc(k);
    end;
  for i := 0 to Length(oBiases)-1 do
  begin
    Result[k] := oBiases[i];
    Inc(k);
  end;
end;
function TNeuralNetwork.GetOutputs: TDoubleArray;
begin
  SetLength(Result, numOutput-1);
  Result := Outputs;
end;

function TNeuralNetwork.ComputeOutputs(xValues: TDoubleArray): TDoubleArray;
var
  i, j: Integer;
begin
  if Length(xValues) <> numInput then
    raise Exception.Create('Inputs Array Does Not Match Neural Network Inputs Count = Array ' + IntToStr(Length(xValues)) + ' Input Count ' + IntToStr(numInput));
  for i := 0 to numHidden-1 do
    hSums[i] := 0.0;
  for i := 0 to numOutput-1 do
    oSums[i] := 0.0;
  for i := 0 to Length(xValues)-1 do
    inputs[i] := xValues[i];
  for j := 0 to numHidden-1 do
    for i := 0 to numInput-1 do
      hSums[j] := hSums[j] + (inputs[i]*ihWeights[i][j]);
  for i := 0 to numHidden-1 do
    hSums[i] := hSums[i] + hBiases[i];
  for i := 0 to numHidden-1 do
    hOutputs[i] := HyperTanFunction(hSums[i]);
  for j := 0 to numOutput-1 do
    for i := 0 to numHidden-1 do
      oSums[j] := oSums[j] + (hOutputs[i] * hoWeights[i][j]);
  for i := 0 to numOutput-1 do
    oSums[i] := oSums[i] + oBiases[i];
  for i := 0 to numOutput-1 do
    Outputs[i] := HyperTanFunction(oSums[i]);
  Result := Outputs;
end;
function TNeuralNetwork.SigmoidFunction(X: Double): Double;
begin
  if X < -45.0 then
    Result := 0
  else if X > 45.0 then
    Result := 1
  else
    Result := 1.0 / (1.0 + Exp(-X));
end;

function TNeuralNetwork.HyperTanFunction(X: Double): Double;
begin
  if X < -45.0 then
    Result := -1
  else if X > 45.0 then
    Result := 1
  else
    Result := Tanh(X);
end;

procedure TNeuralNetwork.UpdateWeights(tValues: TDoubleArray; learn, mom: Double);
var
  i, j: Integer;
  derivative, sum, delta, X: Double;
begin
  if Length(tValues) <> numOutput then
    raise Exception.Create('Target Values Not Same Length As Output = ' + IntToStr(Length(tValues)) + ' - Outputcount = ' + IntToStr(numOutput));
  for i := 0 to Length(oGrads)-1 do
  begin
    derivative := (1 - Outputs[i]) * Outputs[i];
    oGrads[i] := derivative * (tValues[i] - Outputs[i]);
  end;
  for i := 0 to Length(hGrads)-1 do
  begin
    derivative := (1 - hOutputs[i]) * (1 + hOutputs[i]);
    sum := 0;
    for j := 0 to numOutput-1 do
    begin
      X := oGrads[j] * hoWeights[i][j];
      sum := sum + X;
    end;
    hGrads[i] := derivative * sum;
  end;
  for i := 0 to Length(ihWeights)-1 do
    for j := 0 to Length(ihWeights[0])-1 do
    begin
      delta := learn * hGrads[j] * inputs[i];
      ihWeights[i][j] := ihWeights[i][j] + delta;
      ihWeights[i][j] := ihWeights[i][j] + (mom * ihPrevWeightsDelta[i][j]);
      ihPrevWeightsDelta[i][j] := delta;
    end;
  for i := 0 to Length(hBiases)-1 do
  begin
    delta := learn * hGrads[i] * 1.0;
    hBiases[i] := hBiases[i] + delta;
    hBiases[i] := hBiases[i] + (mom * hPrevBiasesDelta[i]);
    hPrevBiasesDelta[i] := delta;
  end;
  for i := 0 to Length(hoWeights)-1 do
    for j := 0 to Length(hoWeights[0])-1 do
    begin
      delta := learn * oGrads[j] * hOutputs[i];
      hoWeights[i][j] := hoWeights[i][j] + delta;
      hoWeights[i][j] := hoWeights[i][j] + (mom * hoPrevWeightsDelta[i][j]);
      hoPrevWeightsDelta[i][j] := delta;
    end;
  for i := 0 to Length(oBiases)-1 do
  begin
    delta := learn * oGrads[i] * 1.0;
    oBiases[i] := oBiases[i] + delta;
    oBiases[i] := oBiases[i] + (mom * oPrevBiasesDelta[i]);
    oPrevBiasesDelta[i] := delta;
  end;
end;
function TNeuralNetwork.Train(TrainData: TDouble2DArray; MaxEpochs: Integer; LearningRate, Momentum, DesiredError: Double): Double;
var
  Epoch, I, Idx: Integer;
  xValues: TDoubleArray;
  tValues: TDoubleArray;
  Sequence: array of Integer;
  MeanSquaredErrorr: Double;
begin
  Epoch := 0;
  SetLength(xValues, numInput);
  SetLength(tValues, numOutput+1);
  SetLength(Sequence, Length(TrainData));
  for I := 0 to Length(Sequence)-1 do
    Sequence[I] := I;
  Shuffle(Sequence);
  while Epoch < MaxEpochs do
  begin
    frmNetworkInterface.redTraining.Lines.Add('Current Epoch - ' + IntToStr(Epoch) + ' : error = ' + FloatToStr(MeanSquaredErrorr) + ' and Desired Error is = ' + FloatToStr(DesiredError));
    Application.ProcessMessages;
    MeanSquaredErrorr := MeanSquaredError(TrainData);
    if MeanSquaredErrorr < DesiredError then
      Break;
    for I := 0 to Length(TrainData)-1 do
    begin
      Idx := Sequence[I];
      xValues := Copy(TrainData[Idx], 0, numInput);
      tValues := Copy(TrainData[Idx], numInput, numOutput);
      ComputeOutputs(xValues);
      UpdateWeights(tValues, LearningRate, Momentum);
    end;
    Inc(Epoch);
    Result := MeanSquaredErrorr;
  end;
end;

procedure TNeuralNetwork.Shuffle(var Seq: array of Integer);
var
  I, R, Tmp: Integer;
begin
  for I := 0 to Length(Seq)-1 do
  begin
    R := RandomRange(I, Length(Seq));
    // swap Seq[I] and Seq[R]
    Tmp := Seq[R];
    Seq[R] := Seq[I];
    Seq[I] := Tmp;
  end;
end;

function TNeuralNetwork.MeanSquaredError(TrainData: TDouble2DArray): Double;
var
  sumSquaredError, err: Double;
  xValues, tValues, yValues: TDoubleArray;
  I, J: Integer;
begin
  sumSquaredError := 0;
  SetLength(xValues, numInput);
  SetLength(tValues, numOutput);
  for I := 0 to Length(TrainData)-1 do
  begin
    xValues := Copy(TrainData[I], 0, numInput);
    tValues := Copy(TrainData[I], numInput, numOutput);
    yValues := ComputeOutputs(xValues);
    for J := 0 to numOutput-1 do
    begin
      err := tValues[J] - yValues[J];
      sumSquaredError := sumSquaredError + (err * err);
    end;
  end;
  Result := sumSquaredError / Length(TrainData);
end;

function TNeuralNetwork.WeightCount: Integer;
begin
  Result := (numInput * numHidden) + (numHidden * numOutput) + numHidden + numOutput;
end;

function THelper.MakeMatrix(Rows, Cols: Integer): TDouble2DArray;
begin
  SetLength(Result, Rows, Cols);
end;

function THelper.Error(tValues: array of Double; yValues: array of Double): Double;
var
  sum: Double;
  i: Integer;
begin
  sum := 0.0;
  for i := 0 to High(tValues)-1 do
    sum := sum + ((tValues[i] - yValues[i]) * (tValues[i] - yValues[i]));
  Result := Sqrt(sum);
end;

end.
I have been through this code nearly a hundred times now and found no logic error or miscalculation. However, since I know the C# example works, this should too.
EDIT :
Observation: it seems that whenever the second input value is 1, the network makes the output far too big (the weights involved with the second input grow far too large for my tastes), hence 1 XOR 1 is wrong, because the second value is 1 (see the data above).
EDIT :
Here are the initial weights of one network I just ran (2 inputs, 2 hidden, 1 output):
Initial Weight0 - 0.0372207039175555
Initial Weight1 - 0.01092082898831
Initial Weight2 - 0.0755334409791976
Initial Weight3 - 0.0866588755254634
Initial Weight4 - 0.0626101282471791
Initial Weight5 - 0.0365478269639425
Initial Weight6 - 0.0724486718699336
Initial Weight7 - 0.0320405319170095
Initial Weight8 - 0.0680674042692408
And after 132 Epochs (an error of 0.001)
Final Weight 0 = 0.432341693850932
Final Weight 1 = 0.338041456780997
Final Weight 2 = 1.0096817584107
Final Weight 3 = 0.839104863469981
Final Weight 4 = -0.275763414588823
Final Weight 5 = -0.171414938983027
Final Weight 6 = 1.26394969109634
Final Weight 7 = 0.998915778388676
Final Weight 8 = 0.549501870374428
EDIT: A new development has come to light: an error when passing in the TrainingData caused it to identify 1 XOR 1 = 1. After fixing this error, however, the network can't converge on an answer (I ran 100 networks, 10,000 epochs each); the lowest MSE (mean squared error) I got was
Current Epoch - 9999 : error = 0.487600332892658 and Desired Error is = 0.001
I logged the inputs and outputs sent into the network on each epoch of training and verified that they are all correct now, so it seems that the network is simply unable to solve the problem?
Also, I'm updating the code above to my most recent version. (08/26/2015)
What's new in this code:
Fixed Copy indices that were 1 instead of 0.
Can confirm that the inputs and desired outputs are copied correctly now.
EDIT: The MSE of the network is actually INCREASING now. Here is the initial error:
0.467486419821747,
and after 10000 Epochs,
0.487600332892658,
so the overall error increased by
0.020113913070917
... This leads me to believe that either my Train procedure or the UpdateWeights procedure is faulty.
EDIT: Another observation I made is that the mean squared error of the network caps out at 2.5 (when running an insanely long training session to make it move that much). The damn MSE is going up instead of down??
EDIT: Another observation of the network's output whilst training:
Current Epoch - 233 : error = 0.802251346201161 and Desired Error is = 0.0001
Current Epoch - 234 : error = 1.24798705066641 and Desired Error is = 0.0001
Current Epoch - 235 : error = 2.47206076545025 and Desired Error is = 0.0001
Current Epoch - 236 : error = 2.49999999811955 and Desired Error is = 0.0001
A radical jump from 1.24 to 2.49 was seen. The network is clearly getting errors in the functions having to do with either training or weight updating.
I suspect from your code that the original works in degrees (because you use constants like -45.0) and Delphi works in radians, so wherever you have 45 you need (PI/4) etc. It is likely that during training you are running out of epochs before you reach the accuracy required because the functions can go out of range.
Source array (4 bytes):
[$80,$80,$80,$80] = integer 0
[$80,$80,$80,$81] = 1
[$80,$80,$80,$FF] = 127
[$80,$80,$81,$01] = 128
I need to convert this to an integer.
Below is my code; it's working at the moment.
// Requires the Math unit for Power()
function convert(b: array of Byte): Integer;
var
  i, st, p: Integer;
  Negative: Boolean;
begin
  Result := 0;
  st := -1;
  for i := 0 to High(b) do
  begin
    if b[i] = $80 then
      Continue // skip leading $80
    else
    begin
      st := i;
      Negative := b[i] < $80;
      b[i] := Abs(b[i] - $80);
      Break;
    end;
  end;
  if st = -1 then
    Exit;
  for i := st to High(b) do
  begin
    p := Round(Power(254, High(b) - i));
    Result := Result + b[i] * p;
    Result := Result - (p div 2);
  end;
  if Negative then
    Result := -1 * Result;
end;
I'm looking for a better function.
Update:
File link: https://drive.google.com/file/d/0ByBA4QF-YOggZUdzcXpmOS1aam8/view?usp=sharing
In the uploaded file, the ID field offset is from 5 to 9.
NEW:
Now I've run into a new problem, which is decoding the date field.
Date field hex [$80,$8F,$21,$C1] -> possible date 1995-12-15
* In the uploaded file, the date field offset is from 199 to 203.
Just an example of some improvements, as outlined by David:
The array is passed by reference as a const.
The array is fixed in size.
The floating-point calculations are converted directly into a constant array.
const
  MaxRange = 3;

type
  TMySpecial = array[0..MaxRange] of Byte;

function Convert(const b: TMySpecial): Integer;
const
  // Pwr[i] = Round(Power(254, MaxRange - i));
  Pwr: array[0..MaxRange] of Cardinal = (16387064, 64516, 254, 1);
var
  i, j: Integer;
  Negative: Boolean;
begin
  for i := 0 to MaxRange do
  begin
    if (b[i] <> $80) then
    begin
      Negative := b[i] < $80;
      Result := Abs(b[i] - $80)*Pwr[i] - (Pwr[i] shr 1);
      for j := i + 1 to MaxRange do
        Result := Result + b[j]*Pwr[j] - (Pwr[j] shr 1);
      if Negative then
        Result := -Result;
      Exit;
    end;
  end;
  Result := 0;
end;
Note that fewer lines of code is not always a sign of better performance.
Always measure performance before optimizing the code, in order to find the real bottlenecks.
Often code readability matters more than over-the-top optimization.
And for future reference, please tell us what the algorithm is supposed to do.
Code for testing:
const
X : array[0..3] of TMySpecial =
(($80,$80,$80,$80), // =integer 0
($80,$80,$80,$81), // = 1
($80,$80,$80,$FF), // = 127
($80,$80,$81,$01)); // = 128
var
i,j: Integer;
sw: TStopWatch;
begin
sw := TStopWatch.StartNew;
for i := 1 to 100000000 do
for j := 0 to 3 do
Convert(X[j]);
WriteLn(sw.ElapsedMilliseconds);
ReadLn;
end.
I'm looking for a function somewhere in Delphi XE2 similar to Inc() which allows me to add/subtract a number of degrees from a current number of degrees and result in the new degrees. For example, if I have a point currently at 5 degrees around a circle, and I want to subtract 10, I should not get -5 degrees, but rather 355 (360 - 5). Same as adding past 360 - it should go back to 0 when it reaches 360.
Is there anything like this already in Delphi so I don't have to re-write it? Perhaps in the Math unit?
uses
  System.SysUtils, Math;

function WrapAngle(angle: Double): Double;
const
  modAngle: Double = 360.0;
begin
  Result := angle - modAngle*Floor(angle/modAngle);
end;

begin
  WriteLn(FloatToStr(WrapAngle(-5)));
  WriteLn(FloatToStr(WrapAngle(5 - 720)));
  WriteLn(FloatToStr(WrapAngle(360)));
  ReadLn;
end.
Produces result:
355
5
0
Update:
As @Giel found, in XE3 there is a new function DegNormalize() which does the job, and is even about 25% faster. The trick is to replace the Floor() call with Int(), and if the result is negative, add modAngle to the result.
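A sketch of the Int()-based variant described above (my reading of the trick, not the XE3 source):

```pascal
function WrapAngleInt(angle: Double): Double;
const
  modAngle: Double = 360.0;
begin
  // Int() truncates toward zero, so a negative input can leave a negative remainder...
  Result := angle - modAngle*Int(angle/modAngle);
  // ...in which case one extra addition brings the result into [0, 360).
  if Result < 0 then
    Result := Result + modAngle;
end;
```

For example, WrapAngleInt(-5) yields 355 and WrapAngleInt(360) yields 0, matching the Floor()-based version above.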
function WrapAngle(Value: Integer): Integer;
begin
Result := Value mod 360;
if Result < 0 then
Inc(Result, 360);
end;
The code I use to perform this task is:
function PosFrac(x: Double): Double;
(* PosFrac(1.2)=0.2 and PosFrac(-1.2)=0.8. *)
begin
Result := Frac(x); (* Frac(x)=x-Int(x) *)
if Result<0.0 then begin
Result := 1.0+Result;
end;
end;
function ModR(const x, y: Double): Double;
(* ModR(1.2,1)=0.2 and ModR(-1.2,1)=0.8 *)
var
absy: Double;
begin
if y=0.0 then begin
Result := 0.0;
end else begin
absy := abs(y);
Result := PosFrac(x/absy)*absy;
end;
end;
function Mod360(const x: Double): Double;
begin
Result := ModR(x, 360.0);
end;
This code will bring all angles into the range 0 to 360. For example:
Writeln(Round(Mod360(5-10)));
Writeln(Round(Mod360(5-360)));
Writeln(Round(Mod360(5-720)));
Writeln(Round(Mod360(5+720)));
outputs:
355
5
5
5
I don't know any, but I'd prefer using a more general solution anyway ...
procedure IncOverFlow(var Value: Double; Difference: Double; Limit: Double = 360);
begin
  Value := Value + Difference;
  while Value < 0 do
    Value := Value + Limit;
  while Value >= Limit do
    Value := Value - Limit;
end;
procedure WrapAngle(var Degs: Integer);
begin
Degs := Degs mod 360;
if Degs < 0 then
Inc(Degs, 360);
end;