Regarding precision and recall - machine-learning

Suppose we have 99% non-spam and 1% spam. I have written the function below:
function y = predictSpam(x)
  y = 0;
return
Here the true positives are zero and the accuracy is 99%. In this case precision and recall are both zero. Is my understanding right? Please help me fill in the table below for this scenario:
                actual class 1 | actual class 0
predict class 1        0       |        0
predict class 0        1       |       99
m = 100. Is the above table filled in correctly?

When using precision and recall I almost always look at the standard precision/recall diagram again.
So we have:
precision = true_positive / (true_positive + false_positive)
recall = true_positive / (true_positive + false_negative)
In your data, 99 samples are correctly classified as 0, and 1 sample is classified as 0 when it should be 1.
With your data:
- true_positive = 0
- true_negative = 99
- false_positive = 0
- false_negative = 1
Your true positive is 0, so yes, both recall and precision will be 0.
Accuracy is indeed 99%.
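For completeness, here is a minimal sketch in plain Python (no libraries assumed) that computes accuracy, precision, and recall from the confusion-matrix counts above, guarding the 0/0 case:
# Confusion-matrix counts from the table above
tp, fp, fn, tn = 0, 0, 1, 99
accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0  # 0/0 guarded as 0
recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
print(accuracy, precision, recall)  # 0.99 0.0 0.0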

Related

Problem using GEKKO to do regime change detection (estimating time-varying variables)

Using GEKKO Python, we are having trouble trying to learn a parameter that can vary multiple times per day. In some disciplines this is also known as 'regime detection' or 'regime change detection'. We (my colleague Henri ter Hofte from Windesheim University of Applied Sciences and I) conceived 3 strategies but fail (more below).
Our question(s):
What are we doing wrong, is there an obvious error in our GEKKO code (more below in the details)?
Is strategy I doomed to fail, and should we switch to strategy II or III?
Is GEKKO Python even suitable for doing this kind of regime (change) detection?
Your help is much appreciated.
=== The problem:
We have time series data about:
(1) CO₂ concentration
(2) ventilation rates (or rather: valve fractions, which give ventilation rates, when multiplied with known maximum ventilation rates)
(3) occupancy (number of persons in a room)
For research question (A) we would like to know a proper estimate for (2) for each hour of the day, given time series data about (1) and (3).
For research question (B) we would like to know a proper estimate for (3) for each hour of the day, given time series data about (1) and (2).
We focus on research question A (but have similar questions for B).
=== The 3 strategies:
We considered 3 different strategies how to implement this using GEKKO Python:
Strategy I. Declare the variable valve_frac as a Manipulated Variable in our GEKKO model (m.MV), since the GEKKO documentation says these variables can be "adjusted by the optimizer to minimize an objective function at every time point" and "Manipulated variables are like FVs, but can change with each data row, either calculated by the optimizer (STATUS=1) or specified by the user (STATUS=0)", according to https://gekko.readthedocs.io/en/latest/imode.html#mv
Strategy II. Split the time into several shorter time spans (e.g. one time span per hour) and then learn valve_frac as a GEKKO Fixed Variable (m.FV), one for each hour.
Strategy III. Reframe the problem to GEKKO as if it were a control problem: the setpoint is reaching a particular CO2 concentration, and GEKKO can use valve_frac as a Control Variable (m.CV).
We tried implementing strategy I (see more info and code below) but fail to get proper results.
Considering an equation derived from physics, we intend to find the best value for a specific variable (valve_frac__0 in the following table), given a dataframe (df_learn) like this:
Index | Date-Time             | occupancy__p | valve_frac__0 | co2__ppm
1     | 2022.12.01 – 00:00:00 | 0            | 0.51          | 546
2     | 2022.12.01 – 00:15:00 | 4            | 0.85          | 820
3     | 2022.12.01 – 00:30:00 | 1            | 0.21          | 595
4     | 2022.12.01 – 00:45:00 | 2            | 0.74          | 635
5     | 2022.12.01 – 00:15:00 | 0            | 0.65          | 559
6     | 2022.12.01 – 00:15:00 | 0            | 0.45          | 538
7     | 2022.12.01 – 00:15:00 | 2            | 0.82          | 659
...   | ...                   | ...          | ...           | ...
1920  | 2022.12.20 – 00:15:00 | 3            | 0.73          | 749
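For reference, the physics-derived CO2 balance that the code below encodes (a restatement of the m.Equation further below, with N the occupancy and f the valve fraction) is, in essence:
$$\frac{d\,c_{CO_2}}{dt} = \frac{N\,\dot n_{CO_2,p}}{V_{room}\,c_{mol,room}}\cdot 10^{6} \;-\; \frac{\left(c_{CO_2} - c_{CO_2,ext}\right)\left(\dot V_{max}\,f + v_{wind}\,A_{infilt}\right)}{V_{room}}$$
where c_CO2 is co2__ppm, ṅ_CO2,p is co2__mol0_p_1_s_1, V_room is room__m3, c_mol,room is room__mol_m_3, V̇_max is vent_max__m3_s_1, v_wind is wind__m_s_1 and A_infilt is infilt__m2; both terms are in ppm/s.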
We are trying to develop a moving horizon estimation model (IMODE=5) or Control model (IMODE=6) to predict the valve_frac__0 value.
The code in GEKKO format follows:
=== Code:
from gekko import GEKKO
import numpy as np
# df_learn, duration__s, step__s, vent_max__m3_s_1 and room__m3
# are assumed to be defined elsewhere
# Gekko Model - Initialize
m = GEKKO(remote = False)
m.time = np.arange(0, duration__s, step__s)
# Conversion factors
s_min_1 = 60
min_h_1 = 60
s_h_1 = s_min_1 * min_h_1
mL_m_3 = 1e3 * 1e3
million = 1e6
# Constants
MET__mL_min_1_kg_1_p_1 = 3.5
desk_work__MET = 1.5
P_std__Pa = 101325
R__m3_Pa_K_1_mol_1 = 8.3145
T_room__degC = 20.0
T_std__degC = 0.0
T_zero__K = 273.15
T_std__K = T_zero__K + T_std__degC
T_room__K = T_zero__K + T_room__degC
infilt__m2 = 0.001
# Approximations
room__mol_m_3 = P_std__Pa / (R__m3_Pa_K_1_mol_1 * T_room__K)
std__mol_m_3 = P_std__Pa / (R__m3_Pa_K_1_mol_1 * T_std__K)
co2_ext__ppm = 415
# National averages
weight__kg = 77.5
MET__m3_s_1_p_1 = MET__mL_min_1_kg_1_p_1 * weight__kg / (s_min_1 * mL_m_3)
MET_mol_s_1_p_1 = MET__m3_s_1_p_1 * std__mol_m_3
co2_o2 = 0.894
co2__mol0_p_1_s_1 = co2_o2 * desk_work__MET * MET_mol_s_1_p_1
# Room averages
wind__m_s_1 = 3.0
# GEKKO Manipulated Variables: measured values
occupancy__p = m.MV(value = df_learn.occupancy__p.values)
occupancy__p.STATUS = 0; occupancy__p.FSTATUS = 1
# Strategy I:
valve_frac__0 = m.MV(value = df_learn.valve_frac__0.values)
valve_frac__0.STATUS = 1; valve_frac__0.FSTATUS = 0
# Strategy II:
#valve_frac__0 = m.FV(value = df_learn.valve_frac__0.values)
#valve_frac__0.STATUS = 1; valve_frac__0.FSTATUS = 0
# GEKKO Controlled Variable (predicted variable)
co2__ppm = m.CV(value = df_learn.co2__ppm.values)
co2__ppm.STATUS = 1; co2__ppm.FSTATUS = 1
# GEKKO - Equations
co2_loss__ppm_s_1 = m.Intermediate((co2__ppm - co2_ext__ppm) * (vent_max__m3_s_1 * valve_frac__0 + wind__m_s_1 * infilt__m2) / room__m3)
co2_gain_mol0_s_1 = m.Intermediate(occupancy__p * co2__mol0_p_1_s_1 / (room__m3 * room__mol_m_3))
co2_gain__ppm_s_1 = m.Intermediate(co2_gain_mol0_s_1 * million)
m.Equation(co2__ppm.dt() == co2_gain__ppm_s_1 - co2_loss__ppm_s_1)
# GEKKO - Solver setting
m.options.IMODE = 5
m.options.EV_TYPE = 1
m.options.NODES = 2
m.solve(disp = False)
The results I got for each strategy are as follows:
Strategy I:
There is no output for the simulated co2__ppm, and the output value is
valve_frac__0 = 0
Strategy II:
There is a big difference between the simulated and measured co2__ppm, and the output value is
valve_frac__0 = 0.166 (which is not reasonable)
The code looks like it should work as long as valve_frac__0 is the adjustable unknown parameter that should be estimated from the CO2 PPM data. Here is a result on a smaller subset of the posted data.
The data doesn't fit exactly if there is a lower bound of zero on the valve position.
valve_frac__0 = m.MV(value = valve_frac__0,lb=0)
Otherwise, the valve position can be adjusted to fit the CO2 data perfectly.
Here is a complete script with the sample data.
from gekko import GEKKO
import numpy as np
# Gekko Model - Initialize
m = GEKKO(remote = False)
# data
# 1 2022.12.01 – 00:00:00 0 0.51 546
# 2 2022.12.01 – 00:15:00 4 0.85 820
# 3 2022.12.01 – 00:30:00 1 0.21 595
# 4 2022.12.01 – 00:45:00 2 0.74 635
# 5 2022.12.01 – 00:15:00 0 0.65 559
# 6 2022.12.01 – 00:15:00 0 0.45 538
# 7 2022.12.01 – 00:15:00 2 0.82 659
occupancy__p = np.array([0,4,1,2,0,0,2])
valve_frac__0 = np.array([0.51,0.85,0.21,0.74,0.65,0.45,0.82])
co2__ppm_meas = np.array([546,820,595,635,559,538,659])
duration__s = len(co2__ppm_meas)
m.time = np.linspace(0,duration__s-1,duration__s)
vent_max__m3_s_1 = 1
room__m3 = 1
# Conversion factors
s_min_1 = 60
min_h_1 = 60
s_h_1 = s_min_1 * min_h_1
mL_m_3 = 1e3 * 1e3
million = 1e6
# Constants
MET__mL_min_1_kg_1_p_1 = 3.5
desk_work__MET = 1.5
P_std__Pa = 101325
R__m3_Pa_K_1_mol_1 = 8.3145
T_room__degC = 20.0
T_std__degC = 0.0
T_zero__K = 273.15
T_std__K = T_zero__K + T_std__degC
T_room__K = T_zero__K + T_room__degC
infilt__m2 = 0.001
# Approximations
room__mol_m_3 = P_std__Pa / (R__m3_Pa_K_1_mol_1 * T_room__K)
std__mol_m_3 = P_std__Pa / (R__m3_Pa_K_1_mol_1 * T_std__K)
co2_ext__ppm = 415
# National averages
weight__kg = 77.5
MET__m3_s_1_p_1 = MET__mL_min_1_kg_1_p_1 \
* weight__kg / (s_min_1 * mL_m_3)
MET_mol_s_1_p_1 = MET__m3_s_1_p_1 * std__mol_m_3
co2_o2 = 0.894
co2__mol0_p_1_s_1 = co2_o2 * desk_work__MET * MET_mol_s_1_p_1
# Room averages
wind__m_s_1 = 3.0
# GEKKO Manipulated Variables: measured values
occupancy__p = m.MV(value = occupancy__p)
occupancy__p.STATUS = 0; occupancy__p.FSTATUS = 1
# Strategy I:
valve_frac__0 = m.MV(value = valve_frac__0,lb=0)
valve_frac__0.STATUS = 1; valve_frac__0.FSTATUS = 0
# Strategy II:
#valve_frac__0 = m.FV(value = df_learn.valve_frac__0.values)
#valve_frac__0.STATUS = 1; valve_frac__0.FSTATUS = 0
# GEKKO Controlled Variable (predicted variable)
co2__ppm = m.CV(value = co2__ppm_meas)
co2__ppm.STATUS = 1; co2__ppm.FSTATUS = 1
# GEKKO - Equations
co2_loss__ppm_s_1 = m.Intermediate((co2__ppm - co2_ext__ppm) \
* (vent_max__m3_s_1 * valve_frac__0 \
+ wind__m_s_1 * infilt__m2) / room__m3)
co2_gain_mol0_s_1 = m.Intermediate(occupancy__p \
* co2__mol0_p_1_s_1 / (room__m3 * room__mol_m_3))
co2_gain__ppm_s_1 = m.Intermediate(co2_gain_mol0_s_1 * million)
m.Equation(co2__ppm.dt() == co2_gain__ppm_s_1 - co2_loss__ppm_s_1)
# GEKKO - Solver setting
m.options.IMODE = 5
m.options.EV_TYPE = 1
m.options.NODES = 2
m.options.SOLVER = 1
m.solve(disp = True)
import matplotlib.pyplot as plt
plt.subplot(2,1,1)
plt.plot(m.time,valve_frac__0.value,'r-',label='Valve Frac')
plt.legend(); plt.grid(); plt.ylabel('Valve Frac')
plt.subplot(2,1,2)
plt.plot(m.time,co2__ppm_meas,'ko',label='Measured')
plt.plot(m.time,co2__ppm.value,'k--',label='Predicted')
plt.legend(); plt.grid()
plt.xlabel('Time'); plt.ylabel('CO2')
plt.savefig('results.png',dpi=300)
plt.show()
For question B, adjust the code to make the valve position fixed at the measured values and the occupancy determined by the optimizer.
occupancy__p = m.MV(value = occupancy__p)
occupancy__p.STATUS = 1; occupancy__p.FSTATUS = 0
# Strategy I:
valve_frac__0 = m.MV(value = valve_frac__0,lb=0)
valve_frac__0.STATUS = 0; valve_frac__0.FSTATUS = 1
Use occupancy__p.MV_STEP_HOR = 2 or higher to decrease the frequency at which the optimized parameter can change (e.g. only once every 2 time points).
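As a minimal sketch (assuming occupancy__p was declared as an m.MV with STATUS=1 as in the snippet above, and 15-minute data as in the question), holding the estimated occupancy constant for an hour at a time would look like:
# Sketch: with 15-minute samples, MV_STEP_HOR=4 lets the optimizer change
# occupancy__p at most once per hour instead of at every time point.
occupancy__p.MV_STEP_HOR = 4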

Round negative values to zero in influx query?

I'm trying to set a lower bound of zero on my influx query result so that negative values are replaced with zero in the result, e.g. for the query:
SELECT x from measurement
If my raw response is
time x
---- -
1632972969471900180 0
1632972969471988621 -130
1632972969472238055 803
then I want to alter the query so that the result is:
time x'
---- -
1632972969471900180 0
1632972969471988621 0
1632972969472238055 803
My solution was to use the ABS absolute value function, adding the absolute value to the original value and dividing by 2. This maps negative values to zero and leaves positive (and zero) values unchanged, e.g.
SELECT (x + ABS(x)) / 2 from measurement
time                 x     x'
----                 -     --
1632972969471900180  0     (0 + 0) / 2 = 0
1632972969471988621  -130  (-130 + 130) / 2 = 0
1632972969472238055  803   (803 + 803) / 2 = 803
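The identity behind this trick is max(x, 0) = (x + |x|) / 2. A quick check outside Influx, in Python, just to illustrate:
# Sanity check of the clamp-to-zero identity used in the query above
for x in (0, -130, 803):
    clamped = (x + abs(x)) / 2
    assert clamped == max(x, 0)
    print(x, '->', clamped)  # 0 -> 0.0, -130 -> 0.0, 803 -> 803.0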

Perceptron: weight for each sample data row, or one common weight?

With perceptron learning, I am really confused about initializing and updating the weights. I have sample data that contains 2 inputs x0 and x1, and I have 80 rows of these 2 inputs, hence an 80x2 matrix.
Do I need to initialize the weights as an 80x2 matrix, or just 2 values w0 and w1? Is the final goal of perceptron learning to find 2 weights w0 and w1 which fit all 80 input sample rows?
I have the following code, and my error never gets to 0, despite going up to 10,000 iterations.
x = input matrix of 80x2
y = output matrix of 80x1
n = number of iterations
w = [0.1, 0.1]
learningRate = 0.1
for i in range(n):
    expectedT = y.transpose()
    xT = x.transpose()
    prediction = np.dot(w, xT)
    for i in range(len(x)):
        if prediction[i] >= 0:
            ypred[i] = 1
        else:
            ypred[i] = 0
    error = expectedT - ypred
    # updating the weights
    w = np.add(w, learningRate * (np.dot(error, x)))
    globalError = globalError + np.square(error)
For each feature you will have one weight. Thus you have two features and two weights. It also helps to introduce a bias, which adds another weight. For more information about bias, check Role of Bias in Neural Networks. The weights should indeed learn how to fit the sample data best. Depending on the data, this can mean that you will never reach an error of 0. For example, a single-layer perceptron cannot learn an XOR gate when using a monotonic activation function (solving XOR with single layer perceptron).
For your example I would recommend two things: introducing a bias, and stopping the training when the error is below a certain threshold or is 0, for example.
I completed your example to learn a logical OR gate:
import numpy as np

# OR input and output
x = np.array([[0,0],[0,1],[1,0],[1,1]])
y = np.array([0,1,1,1])
n = 1000
w = [0.1, 0.1, 0.1]
learningRate = 0.01
globalError = 0

def predict(X):
    prediction = np.dot(w[0:2], X) + w[2]
    ypred = np.zeros(len(y))
    for i in range(len(y)):
        if prediction[i] >= 0:
            ypred[i] = 1
        else:
            ypred[i] = 0
    return ypred

for i in range(n):
    expectedT = y.transpose()
    xT = x.transpose()
    ypred = predict(xT)
    error = expectedT - ypred
    if sum(error) == 0:
        break
    # updating the weights
    w[0:2] = np.add(w[0:2], learningRate * (np.dot(error, x)))
    w[2] += learningRate * sum(error)
    globalError = globalError + np.square(error)
After the training the error is 0
print(error)
# [0. 0. 0. 0.]
And the weights are as follows
print(w)
#[0.1, 0.1, -0.00999999999999999]
The perceptron can now be used as an OR gate:
predict(x.transpose())
#array([0., 1., 1., 1.])
Hope that helps

Logistic regression with gradient descent error

I am trying to implement logistic regression with gradient descent. I compute my cost function j_theta at each iteration, and fortunately j_theta decreases when plotted against the number of iterations.
The data set I use is given below:
x=
1 20 30
1 40 60
1 70 30
1 50 50
1 50 40
1 60 40
1 30 40
1 40 50
1 10 20
1 30 40
1 70 70
y= 0
1
1
1
0
1
0
0
0
0
1
The code that I managed to write for logistic regression using Gradient descent is:
%1. The below code would load the data present in your desktop to the octave memory
x=load('stud_marks.dat');
%y=load('ex4y.dat');
y=x(:,3);
x=x(:,1:2);
%2. Now we want to add a column x0 with all the rows as value 1 into the matrix.
%First take the length
[m,n]=size(x);
x=[ones(m,1),x];
X=x;
% Now we normalize x1 and x2; we skip the first column x0 because it should stay as 1.
mn = mean(x);
sd = std(x);
x(:,2) = (x(:,2) - mn(2))./ sd(2);
x(:,3) = (x(:,3) - mn(3))./ sd(3);
% We will not use the vectorized technique because it is hard to debug; we use nested for loops instead
max_iter=50;
theta = zeros(size(x(1,:)))';
j_theta=zeros(max_iter,1);
for num_iter=1:max_iter
    % We calculate the cost function
    j_cost_each=0;
    alpha=1;
    theta
    for i=1:m
        z=0;
        for j=1:n+1
            % theta(j)
            z=z+(theta(j)*x(i,j));
            z
        end
        h= 1.0 ./(1.0 + exp(-z));
        j_cost_each=j_cost_each + ( (-y(i) * log(h)) - ((1-y(i)) * log(1-h)) );
        % j_cost_each
    end
    j_theta(num_iter)=(1/m) * j_cost_each;
    for j=1:n+1
        grad(j) = 0;
        for i=1:m
            z=(x(i,:)*theta);
            z
            h=1.0 ./ (1.0 + exp(-z));
            h
            grad(j) += (h-y(i)) * x(i,j);
        end
        grad(j)=grad(j)/m;
        grad(j)
        theta(j)=theta(j)- alpha * grad(j);
    end
end
figure
plot(1:max_iter, j_theta, 'b', 'LineWidth', 2)  % plot the cost over the iterations actually run
hold off
figure
%3. In this step we will plot the graph for the given input data set just to see how is the distribution of the two class.
pos = find(y == 1); % This will take the postion or array number from y for all the class that has value 1
neg = find(y == 0); % Similarly this will take the position or array number from y for all class that has value 0
% Now we plot the graph column x1 Vs x2 for y=1 and y=0
plot(x(pos, 2), x(pos,3), '+');
hold on
plot(x(neg, 2), x(neg, 3), 'o');
xlabel('x1 marks in subject 1')
ylabel('y1 marks in subject 2')
legend('pass', 'Failed')
plot_x = [min(x(:,2))-2, max(x(:,2))+2]; % This min and max decides the length of the decision graph.
% Calculate the decision boundary line
plot_y = (-1./theta(3)).*(theta(2).*plot_x +theta(1));
plot(plot_x, plot_y)
hold off
%%%%%%% The only difference is: in the last plot I used X, whereas now I use x, whose features are feature scaled %%%%%%%%%%%
If you view the graph of x1 vs x2, you can see how the two classes are distributed.
After I run my code I get a decision boundary. The shape of the decision line seems to be okay, but it is a bit displaced. The graph of x1 vs x2 with the decision boundary is given below:
(image: x1 vs x2 scatter plot with the decision boundary)
Please suggest where I am going wrong.
Thanks :)
The new graph:
(image: the same plot after feature scaling, with the decision boundary)
If you look at the new graph, the coordinates of the x axis have changed. That's because I use x (feature scaled) instead of X.
The problem lies in your cost function calculation and/or gradient calculation; your plotting function is fine. I ran your dataset on the algorithm I implemented for logistic regression, but using the vectorized technique, because in my opinion it is easier to debug.
The final values I got for theta were
theta =
[-76.4242,
0.8214,
0.7948]
I also used alpha = 0.3
I plotted the decision boundary and it looks fine. I would recommend using the vectorized form, as it is easier to implement and to debug in my opinion.
I also think your implementation of gradient descent is not quite correct: 50 iterations is just not enough, and the cost at the last iteration is not good enough. Maybe you should try to run it for more iterations with a stopping condition.
Also check this lecture for optimization techniques.
https://class.coursera.org/ml-006/lecture/37
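For reference, here is a minimal vectorized sketch in Python/NumPy of the kind of implementation recommended above (alpha = 0.3 matches the value mentioned; the iteration count and feature standardization details are illustrative choices, and the resulting coefficients are in the scaled feature space, so they need not match the theta values reported above):
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_gd(X, y, alpha=0.3, max_iter=5000):
    # X: m x n design matrix (first column of ones for the intercept), y: m labels
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(max_iter):
        h = sigmoid(X @ theta)       # predictions for all rows at once
        grad = X.T @ (h - y) / m     # vectorized gradient of the log-loss
        theta -= alpha * grad
    return theta

# Marks data from the question, with the intercept column already added
X = np.array([[1,20,30],[1,40,60],[1,70,30],[1,50,50],[1,50,40],[1,60,40],
              [1,30,40],[1,40,50],[1,10,20],[1,30,40],[1,70,70]], dtype=float)
y = np.array([0,1,1,1,0,1,0,0,0,0,1], dtype=float)
# standardize the two feature columns, as the question's Octave code does
X[:,1:] = (X[:,1:] - X[:,1:].mean(axis=0)) / X[:,1:].std(axis=0)
theta = logistic_gd(X, y)
print(theta)  # coefficients in the scaled feature space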

Most significant samples that capture 80% of energy in Signal Processing

Is there an algorithm for finding peaks in a numeric signal? For example, if we have:
x = [ 0.01 -0.5 0.02 0.5 0.003 0.8 1 0 0 0 -1 0 0.0001 0 0 0 ]
As you can see, we have 5 peak samples:
( -0.5 , 0.5 , 0.8 , 1 , -1 )
These peaks should capture 80% of the energy of signal x.
In the comments you finally mention you want to do this in Matlab.
Well, have this Pseudo-code instead:
power = x .^ 2                    # element-wise squaring
total = sum(power)
sorted, indices = sort(power)     # ascending sort; note down the permutation indices
max_indices = []
max_sum = 0
while max_sum < total * 0.8:
    currentpower = pop(sorted)    # take the largest remaining power
    currentindex = pop(indices)
    max_sum += currentpower
    max_indices.append(currentindex)
return max_indices
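A runnable NumPy version of the same idea, as a sketch (the 0.8 energy fraction and the example signal come from the question):
import numpy as np

def significant_indices(x, fraction=0.8):
    # Return indices of the largest-energy samples that together capture
    # at least `fraction` of the total signal energy.
    power = np.asarray(x, dtype=float) ** 2
    total = power.sum()
    order = np.argsort(power)[::-1]        # indices sorted by descending power
    cumulative = np.cumsum(power[order])
    k = np.searchsorted(cumulative, fraction * total) + 1
    return order[:k]

x = [0.01, -0.5, 0.02, 0.5, 0.003, 0.8, 1, 0, 0, 0, -1, 0, 0.0001, 0, 0, 0]
idx = significant_indices(x)
print(idx, [x[i] for i in idx])  # the samples 1, -1 and 0.8 already exceed 80%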
