Strange Loss function behaviour when training CNN

I'm trying to train my network on MNIST using a self-made CNN (C++).
It gives reasonably good results when I use a simple model, like:
Convolution (2 feature maps, 5x5) (Tanh) -> MaxPool (2x2) -> Flatten -> Fully-Connected (64) (Tanh) -> Fully-Connected (10) (Sigmoid).
After 4 epochs, it behaves as shown in the first linked plot [1].
After 16 epochs, it gives ~6.5% error on the test dataset.
But with 4 feature maps in the convolution layer, the MSE doesn't improve, and sometimes even increases 2.5-fold [2].
Training is done in online mode (one sample at a time) with the Adam optimizer (alpha: 0.01, beta_1: 0.9, beta_2: 0.999, epsilon: 1.0e-8). The update is calculated as:
double AdamOptimizer::calc(int t, double& m_t, double& v_t, double g_t)
{
    // update biased first- and second-moment estimates
    m_t = this->beta_1 * m_t + (1.0 - this->beta_1) * g_t;
    v_t = this->beta_2 * v_t + (1.0 - this->beta_2) * (g_t * g_t);
    // bias correction
    double m_t_aver = m_t / (1.0 - std::pow(this->beta_1, t + 1));
    double v_t_aver = v_t / (1.0 - std::pow(this->beta_2, t + 1));
    // return the (negative) parameter update
    return -(this->alpha * m_t_aver) / (std::sqrt(v_t_aver) + this->epsilon);
}
So, could this problem be caused by the lack of some additional learning techniques (dropout, batch normalization), by wrongly set parameters, or by some implementation issue?
P.S. I can provide a GitHub link if necessary.

Try to decrease the learning rate.
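To see why this matters so much here: with a steady gradient, Adam's bias-corrected step settles at a magnitude of roughly alpha regardless of how small the gradient is, so with online (per-sample) training and alpha = 0.01 every weight moves by about 0.01 on every single sample; alpha = 0.001 is a more typical starting value. Below is the same update transcribed to Python purely for illustration (a sketch, not the author's code):
import numpy as np

def adam_step(t, m, v, g, alpha, b1=0.9, b2=0.999, eps=1e-8):
    # same arithmetic as AdamOptimizer::calc above, ported to Python for illustration
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** (t + 1))
    v_hat = v / (1 - b2 ** (t + 1))
    return -(alpha * m_hat) / (np.sqrt(v_hat) + eps), m, v

# with a steady gradient, the bias-corrected step settles near -alpha, no matter how small g is
for alpha in (0.01, 0.001):
    m = v = 0.0
    step = 0.0
    for t in range(3):
        step, m, v = adam_step(t, m, v, g=1e-4, alpha=alpha)
    print(alpha, step)   # ~ -0.01 and ~ -0.001 per weight, per sample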

Related

Denormalizing thetas after a linear regression with gradient descent

I have the following set of data:
km,price
240000,3650
139800,3800
150500,4400
185530,4450
176000,5250
114800,5350
166800,5800
89000,5990
144500,5999
84000,6200
82029,6390
63060,6390
74000,6600
97500,6800
67000,6800
76025,6900
48235,6900
93000,6990
60949,7490
65674,7555
54000,7990
68500,7990
22899,7990
61789,8290
After normalizing them, I'm performing a gradient descent that gives me the following thetas:
θ0 = 0.9362124793084768
θ1 = -0.9953762249792935
I can correctly predict the price if I feed a normalized mileage and then denormalize the predicted price, i.e.:
Asked price for a mileage of 50000km:
normalized mileage: 0.12483129971764294
normalized price: (mx + c) = 0.8119583714362707
real price: 7417.486843464296
What I'm looking for is a way to revert my thetas back to their non-normalized values, but I've been unable to, no matter which equation I tried. Is there a way to do so?
As usual, it took asking a question on Stack Overflow for me to solve it myself moments later...
It was simply a two-variable equation to solve, as you can see here (excuse the handwriting): https://ibb.co/178qWcQ.
Here is the Python code that does the computation:
x0, x1 = self.training_set[0][0], self.training_set[1][0]
x0n, x1n = self.normalized_training_set[0][0], self.normalized_training_set[1][0]
y0n, y1n = self.hypothesis(x0n), self.hypothesis(x1n)
p_diff = self.max_price - self.min_price
theta0 = (x1 / (x1 - x0)) * (y0n * p_diff + self.min_price - (x0 / x1 * (y1n * p_diff + self.min_price)))
y0 = self.training_set[0][1]
theta1 = (y0 - theta0) / x0
print(theta0, theta1)  # RESULT: 8481.172796984529 -0.020129886654102203
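For reference, the same conversion can also be done in closed form. Assuming both columns were min-max normalized (x_n = (x - min_km) / (max_km - min_km) and y_n = (y - min_price) / (max_price - min_price), where min_km/max_km are hypothetical names mirroring min_price/max_price above), substituting those definitions into y_n = theta1_n * x_n + theta0_n gives the unnormalized coefficients directly. A sketch:
def denormalize_thetas(theta0_n, theta1_n, min_km, max_km, min_price, max_price):
    # substitute the min-max definitions into y_n = theta1_n * x_n + theta0_n
    km_range = max_km - min_km
    price_range = max_price - min_price
    theta1 = theta1_n * price_range / km_range
    theta0 = min_price + price_range * (theta0_n - theta1_n * min_km / km_range)
    return theta0, theta1
With the thetas above and the data's extremes (min_km = 22899, max_km = 240000, min_price = 3650, max_price = 8290) this gives theta0 ≈ 8481.2, matching the printed value, and theta1 ≈ -0.0213; the small gap to the printed theta1 appears to come from the last step above using the actual first-row price (training_set[0][1]) rather than the model's prediction at that mileage.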

How to implement custom logloss with identical behavior to binary objective in LightGBM?

I am trying to implement my own loss function for binary classification. To get started, I want to reproduce the exact behavior of the binary objective. In particular, I want that:
The loss of both functions has the same scale
The training and validation slope is similar
predict_proba(X) returns probabilities
None of this is the case for the code below:
import sklearn.datasets
import lightgbm as lgb
import numpy as np
X, y = sklearn.datasets.load_iris(return_X_y=True)
X, y = X[y <= 1], y[y <= 1]
def loglikelihood(labels, preds):
    preds = 1. / (1. + np.exp(-preds))
    grad = preds - labels
    hess = preds * (1. - preds)
    return grad, hess
model = lgb.LGBMClassifier(objective=loglikelihood) # or "binary"
model.fit(X, y, eval_set=[(X, y)], eval_metric="binary_logloss")
lgb.plot_metric(model.evals_result_)
With objective="binary", the logloss curve looks as expected; with objective=loglikelihood, the loss is on a different scale and the slope is not even smooth.
Moreover, sigmoid has to be applied to model.predict_proba(X) to get probabilities for loglikelihood (as I have figured out from https://github.com/Microsoft/LightGBM/issues/2136).
Is it possible to get the same behavior with a custom loss function? Does anybody understand where all these differences come from?
Looking at the output of model.predict_proba(X) in each case, we can see that the built-in binary_logloss model returns probabilities, while the custom model returns logits.
The built-in evaluation function takes probabilities as input. To fit the custom objective, we need a custom evaluation function which will take logits as input.
Here is how you could write this. I've changed the sigmoid calculation so that it doesn't overflow if logit is a large negative number.
def loglikelihood(labels, logits):
    # numerically stable sigmoid:
    preds = np.where(logits >= 0,
                     1. / (1. + np.exp(-logits)),
                     np.exp(logits) / (1. + np.exp(logits)))
    grad = preds - labels
    hess = preds * (1. - preds)
    return grad, hess

def my_eval(labels, logits):
    # numerically stable log-sigmoid:
    logsigmoid = np.where(logits >= 0,
                          -np.log(1 + np.exp(-logits)),
                          logits - np.log(1 + np.exp(logits)))
    loss = (-logsigmoid + logits * (1 - labels)).mean()
    return "error", loss, False
model1 = lgb.LGBMClassifier(objective='binary')
model1.fit(X, y, eval_set=[(X, y)], eval_metric="binary_logloss")
model2 = lgb.LGBMClassifier(objective=loglikelihood)
model2.fit(X, y, eval_set=[(X, y)], eval_metric=my_eval)
Now the results are the same.
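One remaining difference, as noted above: with the custom objective, the model still outputs raw scores, so you have to push them through the sigmoid yourself to get probabilities. A sketch of that last step (the exact shape returned by predict_proba under a custom objective can vary between LightGBM versions, so treat this as an assumption to verify):
def sigmoid(z):
    # numerically stable, same pattern as in loglikelihood above
    return np.where(z >= 0, 1. / (1. + np.exp(-z)), np.exp(z) / (1. + np.exp(z)))

proba_builtin = model1.predict_proba(X)           # already probabilities
proba_custom = sigmoid(model2.predict_proba(X))   # raw scores -> probabilities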

Understanding code wrt Logistic Regression using gradient descent

I was following Siraj Raval's videos on logistic regression using gradient descent:
1) Link to longer video :
https://www.youtube.com/watch?v=XdM6ER7zTLk&t=2686s
2) Link to shorter video :
https://www.youtube.com/watch?v=xRJCOz3AfYY&list=PL2-dafEMk2A7mu0bSksCGMJEmeddU_H4D
In the videos he talks about using gradient descent to reduce the error for a set number of iterations so that the function converges (slope becomes zero).
He also illustrates the process via code. The following are the two main functions from the code:
from numpy import array

def step_gradient(b_current, m_current, points, learningRate):
    b_gradient = 0
    m_gradient = 0
    N = float(len(points))
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        b_gradient += -(2/N) * (y - ((m_current * x) + b_current))
        m_gradient += -(2/N) * x * (y - ((m_current * x) + b_current))
    new_b = b_current - (learningRate * b_gradient)
    new_m = m_current - (learningRate * m_gradient)
    return [new_b, new_m]

def gradient_descent_runner(points, starting_b, starting_m, learning_rate, num_iterations):
    b = starting_b
    m = starting_m
    for i in range(num_iterations):
        b, m = step_gradient(b, m, array(points), learning_rate)
    return [b, m]

# The above functions are called below (points is an N x 2 array of (x, y) samples
# loaded from the dataset in the original code):
learning_rate = 0.0001
initial_b = 0  # initial y-intercept guess
initial_m = 0  # initial slope guess
num_iterations = 1000
[b, m] = gradient_descent_runner(points, initial_b, initial_m, learning_rate, num_iterations)
# code taken from Siraj Raval's github page
Why do the values of b & m continue to update for all the iterations? After a certain number of iterations the function will converge, i.e. we find the values of b & m that give slope = 0.
So why do we continue iterating after that point and keep updating b & m?
Aren't we losing the 'correct' b & m values this way? How does the learning rate help the convergence process if we keep updating values after converging? Why is there no check for convergence, and how does this actually work?
In practice, you will most likely not reach a slope of exactly 0. Think of your loss function as a bowl. If your learning rate is too high, it is possible to overshoot the lowest point of the bowl. Conversely, if the learning rate is too low, learning becomes too slow and you won't reach the lowest point of the bowl before the iterations run out.
That's why in machine learning, the learning rate is an important hyperparameter to tune.
Actually, once we reach a slope of 0, b_gradient and m_gradient become 0;
thus, for:
new_b = b_current - (learningRate * b_gradient)
new_m = m_current - (learningRate * m_gradient)
new_b and new_m will keep their old, correct values, since nothing will be subtracted from them.
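If you do want to stop as soon as the updates become negligible instead of always running num_iterations steps, you can add an explicit convergence check. A minimal sketch built on the step_gradient function above (the tolerance tol is an arbitrary choice):
def gradient_descent_with_check(points, starting_b, starting_m, learning_rate, num_iterations, tol=1e-9):
    b, m = starting_b, starting_m
    for i in range(num_iterations):
        new_b, new_m = step_gradient(b, m, array(points), learning_rate)
        # if the parameters barely moved, the gradients are ~0, so stop early
        if abs(new_b - b) < tol and abs(new_m - m) < tol:
            return [new_b, new_m]
        b, m = new_b, new_m
    return [b, m]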

Is there an easy way to implement an Optimizer.Maximize() function in TensorFlow

There are several experiments that rely on gradient ascent rather than gradient descent. I have looked into some approaches to using "cost" and the minimize function to simulate the "maximize" function, but I am still not certain I know how to properly implement a maximize() function. Also, in most of these cases, I would say they are closer to unsupervised learning. So given this code concept for a cost function:
cost = (Yexpected - Ycalculated)^2
train_step = tf.train.AdamOptimizer(0.5).minimize(cost)
I would like to write something where I am following the positive gradient, and there may not be a Yexpected value:
maxMe = Function(Ycalculated)
train_step = tf.train.AdamOptimizer(0.5).maximize(maxMe)
A good example of this need is "http://cs229.stanford.edu/proj2009/LvDuZhai.pdf" with Recurrent Reinforcement Learning.
I have read a few papers and references stating that changing the sign will flip the direction of movement toward an increasing gradient, but given TensorFlow's internal calculation of the gradient, I am not sure if this will work to maximize, as I don't know of a way to validate the results:
maxMe = Function(Ycalculated)
train_step = tf.train.AdamOptimizer(0.5).minimize( -1 * maxMe )
The intuition is simple: the minimize() function keeps pushing the given value down. For example, if you start with 5, then with each iteration (depending on the learning rate) the value becomes, say, 4, then 3, then 2, 1, 0, and so on if it is possible to bring it down further. Now if you pass -5 at the beginning (which is in fact +5 with the sign flipped explicitly), the gradient will try to change the parameters to bring the number down further, to -6, -7, -8, and so on. But in fact the underlying function is increasing, because we changed its sign and the actual sign is (+). In other words, in the latter case the gradient changes the parameters of the neural network in a way that maximizes the function, not minimizes it.
Toy example with arbitrary numbers:
The input x = 1.5, The weight parameter at time (t) w_t = 0.1,
The observed response y = 3.0, The learning rate lr = 0.1.
x * w = 0.15 (this is y predicted for the current w)
loss function = (3.0 - 0.15)^2 = 8.1
Applying gradient descent:
w_(t+1) = w_t - lr * (derivative of loss function with respect to w)
w_(t+1) = 0.1 - (0.1 * [1.5 * 2(0.15 - 3.0)]) = 0.1 - (-0.855) = 0.955
If we use the new w_(t+1) we will have:
1.5 * 0.955 = 1.4325 (which is closer to the correct answer 3.0)
and the new loss is: (3.0 - 1.4325)^2 = 2.46 (a smaller error).
If we keep iterating, we will adjust w to a value that gives us the minimum cost possible.
Now let's repeat the same experiment but with the sign flipped to negative:
loss function = - (3.0 - 0.15)^2 = -8.1
Applying gradient descent:
w_(t+1) = w_t - lr * (derivative of loss function with respect to w)
w_(t+1) = 0.1 - (0.1 * [1.5 * -2(0.15 - 3.0)]) = 0.1 - 0.855 = −0.755
If we apply the new w_(t+1) we will have:
1.5 * −0.755 = −1.1325 and the new loss is: (3.0 - (-1.1325))^2 = 17.07
(the loss function is maximizing!).
That is also applicable to any differentiable function, but this is just a simple naive example to demonstrate the idea.
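You can also check the arithmetic above with a few throwaway lines of Python, using the same toy numbers:
x, w, y, lr = 1.5, 0.1, 3.0, 0.1
grad = 2 * (x * w - y) * x     # d/dw of (x*w - y)^2 at w = 0.1, i.e. -8.55
print(w - lr * grad)           # 0.955  -> descending the loss
print(w - lr * (-grad))        # -0.755 -> descending the negated loss, i.e. ascending the loss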
So, you can do, as you suggested already:
optimizer.minimize( -1 * value)
Or if you like, create a wrapper function (which in fact is needless, but just to mention it):
def maximize(optimizer, value, **kwargs):
    return optimizer.minimize(-value, **kwargs)
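Hypothetical usage, reusing the TF1-style names from the question (Function and Ycalculated stand for whatever differentiable quantity you actually compute):
maxMe = Function(Ycalculated)
train_step = maximize(tf.train.AdamOptimizer(0.5), maxMe)
# equivalent to: tf.train.AdamOptimizer(0.5).minimize(-1 * maxMe)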

How to efficiently calculate MAPE for the training/test set of a neural network in MATLAB?

I've been using MATLAB for a time-series dataset (electricity consumption) as part of my course. It consists of 40,000+ samples. After building the neural network, I wanted to test its accuracy. I'm mostly interested in MAPE (mean absolute percentage error) and RMS (root mean square) errors. To calculate them, I've used the following lines of code:
mape_res = zeros(N_TRAIN);
mse_res = zeros(N_TRAIN);
for i_train = 1:N_TRAIN
    Inp = inputs_consumption(i_train);
    Actual_Output = targets_consumption(i_train + 1);
    Observed_Output = sim(ann, Inp);
    mape_res(i_train) = abs(Observed_Output - Actual_Output) / Actual_Output;
    mse_res(i_train) = Observed_Output - Actual_Output;
end
mape = sum(mape_res) / N_TRAIN;
mse = sum(power(mse_res, 2)) / N_TRAIN;
sprintf('The MSE on training is %g', mse)
sprintf('The MAPE on training is %g', mape)
The problem with the above code is that, for a large dataset (40K samples), it takes almost 15 minutes to run through the loop, which is a long wait just to get the error rates. Isn't there a more efficient way to calculate them?
You could always do a rolling average that gets updated each iteration, as follows:
mape_res = abs(Observed_Output - Actual_Output) / Actual_Output;
mse_res = Observed_Output - Actual_Output;
alpha = 1 / i_train;
mape = mape * (1 - alpha) + mape_res * alpha;
mse = mse * (1 - alpha) + power(mse_res,2) * alpha;
Then you could either display the resulting values each iteration, use them for stopping criteria if the desired error rate is reached, or both. This also has the added benefit of not requiring the initialization and population of the mape_res and mse_res vectors unless they happen to be needed elsewhere...
Edit: Do make sure to initialize the mape and mse values to zero prior to entering the for loop :)
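If it helps to convince yourself that the rolling update is exact, here is the same idea in a few lines of Python (illustration only, with random numbers standing in for the per-sample errors); the running value ends up equal to the plain batch mean, up to floating-point error:
import numpy as np

errs = np.random.randn(40000)               # stand-in for the per-sample errors
mean = 0.0
for i, e in enumerate(errs, start=1):
    alpha = 1.0 / i
    mean = mean * (1 - alpha) + e * alpha   # same update rule as above
print(mean, errs.mean())                    # the two agree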
