small learning rate vs big learning rate - machine-learning

I'm new to ML. While I was reading about backpropagation, a question suddenly came to mind.
In backpropagation learning of a neural network, should we start with a small learning rate and slowly increase it during the learning process, or should we start with a big learning rate and slowly reduce it during the learning process?
Which one is correct?

Generally, the second one is correct.
Think about it this way: a big learning rate means that you roughly search for the best area in the space. Then, with a small learning rate, you tune the weights to find the best value.
If you used a constant big learning rate, you would "jump" around the minimum point. If you used a constant small learning rate, it would take a lot of time to converge. That's why decaying the learning rate is a good idea.
Having said that, there are a few more advanced tricks for learning rate scheduling that do not monotonically decrease the learning rate.
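As a minimal sketch of what such a decay schedule might look like in code (the function name and all the numbers below are illustrative, not taken from any particular framework):

    # Illustrative exponential decay: start big, shrink the rate as training progresses.
    def decayed_lr(initial_lr, decay_rate, step, decay_steps):
        """Learning rate after `step` updates; halves every `decay_steps` steps when decay_rate=0.5."""
        return initial_lr * decay_rate ** (step / decay_steps)

    for step in (0, 1000, 5000, 10000):
        print(step, decayed_lr(initial_lr=0.1, decay_rate=0.5, step=step, decay_steps=1000))

In practice, frameworks such as Keras and PyTorch ship ready-made schedulers, including non-monotonic ones like cyclical rates and warm restarts, so you would usually use those instead of rolling your own.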

Related

How learning rate influences gradient descent?

When gradient descent quantitatively suggests by how much the biases and weights should be reduced, what is the learning rate doing? I am a beginner, someone please enlighten me on this.
Learning rate is a hyperparameter that controls how much we adjust the weights of our network with respect to the loss gradient. The lower the value, the slower we travel along the downward slope. While a low learning rate might be a good idea in terms of making sure that we do not miss any local minima, it could also mean that we will take a long time to converge, especially if we get stuck on a plateau region.
new_weight = existing_weight - learning_rate * gradient
If the learning rate is too small, gradient descent can be slow.
If the learning rate is too large, gradient descent can overshoot the minimum. It may fail to converge, or it may even diverge.
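A toy sketch (minimizing f(w) = w**2, which is not from the answer, just an illustration) makes the three regimes concrete:

    # 1-D gradient descent on f(w) = w**2; the minimum is at w = 0.
    def gradient_descent(learning_rate, steps=20, w=5.0):
        for _ in range(steps):
            gradient = 2 * w                      # derivative of w**2
            w = w - learning_rate * gradient      # new_weight = existing_weight - learning_rate * gradient
        return w

    print(gradient_descent(0.01))   # too small: still far from 0 after 20 steps
    print(gradient_descent(0.1))    # reasonable: close to the minimum
    print(gradient_descent(1.1))    # too large: each step overshoots and the iterate diverges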

How to interpret the results of a training/validation learning curve?

I am using the Random Forest classifier in the Scikit package and have plotted F1 scores versus training set size. The red curve shows the training set F1 scores and the green curve the validation set scores. This is about what I expected, but I would like some advice on interpretation.
I see that there is some significant variance, yet the validation curve appears to be converging. Should I assume that adding data would do little to affect the variance given the convergence, or am I jumping to conclusions about the rate of convergence?
Is the amount of variance here significant enough to warrant taking further actions that may increase the bias slightly? I realize this is a fairly domain-specific question, but I wonder if there are any general guidelines for how much variance is worth a bit of bias tradeoff?
I see that there is some significant variance, yet the validation curve appears to be converging. Should I assume that adding data would do little to affect the variance given the convergence, or am I jumping to conclusions about the rate of convergence?
This seems true conditional on your learning procedure, and in particular on the selection of hyperparameters. It does not mean that the same effect would occur with a different set of hyperparameters. It only seems that, given the current setting, the rate of convergence is relatively small, so getting to 95% would probably require a significant amount of data.
Is the amount of variance here significant enough to warrant taking further actions that may increase the bias slightly? I realize this is a fairly domain-specific question, but I wonder if there are any general guidelines for how much variance is worth a bit of bias tradeoff?
Yes, in general these kinds of curves at least do not rule out going for higher bias. You are clearly overfitting to the training set. On the other hand, trees usually do that, so increasing bias might be hard without changing the model. One option I would suggest is going for Extremely Randomized Trees, which is nearly the same as Random Forest, but with randomly chosen thresholds instead of full optimization. They have significantly bigger bias and should bring these curves a bit closer to each other.
Obviously there is no guarantee; as you said, this is data specific, but the general characteristic looks promising (it might, however, require changing the model).
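As a hedged sketch of that swap: in scikit-learn, ExtraTreesClassifier is a drop-in replacement for RandomForestClassifier, so the change is a one-liner (the synthetic dataset below is only a stand-in for the asker's data):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Stand-in binary classification data.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    for Model in (RandomForestClassifier, ExtraTreesClassifier):
        clf = Model(n_estimators=100, random_state=0)
        scores = cross_val_score(clf, X, y, scoring="f1", cv=5)
        print(Model.__name__, round(scores.mean(), 3))

Plotting the same training/validation curves for both models would show directly whether the extra randomization narrows the gap.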

Will larger batch size make computation time less in machine learning?

I am trying to tune a hyperparameter, the batch size, in a CNN. I have a computer with a Core i7 and 12 GB of RAM, and I am training a CNN on the CIFAR-10 dataset, which can be found in this blog. First, here is what I have read and learned about batch size in machine learning:
let's first suppose that we're doing online learning, i.e. that we're using a mini-batch size of 1. The obvious worry about online learning is that using mini-batches which contain just a single training example will cause significant errors in our estimate of the gradient. In fact, though, the errors turn out to not be such a problem. The reason is that the individual gradient estimates don't need to be super-accurate. All we need is an estimate accurate enough that our cost function tends to keep decreasing. It's as though you are trying to get to the North Magnetic Pole, but have a wonky compass that's 10-20 degrees off each time you look at it. Provided you stop to check the compass frequently, and the compass gets the direction right on average, you'll end up at the North Magnetic Pole just fine.
Based on this argument, it sounds as though we should use online learning. In fact, the situation turns out to be more complicated than that. As we know, we can use matrix techniques to compute the gradient update for all examples in a mini-batch simultaneously, rather than looping over them. Depending on the details of our hardware and linear algebra library, this can make it quite a bit faster to compute the gradient estimate for a mini-batch of (for example) size 100, rather than computing the mini-batch gradient estimate by looping over the 100 training examples separately. It might take (say) only 50 times as long, rather than 100 times as long. Now, at first it seems as though this doesn't help us that much.
With our mini-batch of size 100 the learning rule for the weights looks like
w -> w - (eta / 100) * sum_x grad C_x
where the sum is over training examples in the mini-batch. This is versus
w -> w - eta * grad C_x
for online learning.
Even if it only takes 50 times as long to do the mini-batch update, it still seems likely to be better to do online learning, because we'd be updating so much more frequently. Suppose, however, that in the mini-batch case we increase the learning rate by a factor of 100, so the update rule becomes
w -> w - eta * sum_x grad C_x
That's a lot like doing 100 separate instances of online learning with a learning rate of eta. But it only takes 50 times as long as doing a single instance of online learning. Still, it seems distinctly possible that using the larger mini-batch would speed things up.
Now I tried this with the MNIST digit dataset: I ran a sample program and set the batch size to 1 at first. I noted the training time needed for the full dataset. Then I increased the batch size and noticed that training became faster.
But when training with this code and github link, changing the batch size does not decrease the training time. It stayed the same whether I used 30, 64, or 128. They say they got 92% accuracy, and above 40% accuracy after two or three epochs. But when I ran the code on my computer, without changing anything other than the batch size, I got a worse result after 10 epochs: only 28%, and the test accuracy stayed stuck there in the following epochs. Then I thought that since they used a batch size of 128, I needed to use that too. I did, but it got even worse, giving only 11% after 10 epochs and getting stuck there. Why is that?
Neural networks learn by gradient descent on an error function in weight space which is parametrized by the training examples. This means the variables are the weights of the neural network. The function is "generic" and becomes specific when you use training examples. The "correct" way would be to use all training examples to build the specific function. This is called "batch gradient descent" and is usually not done for two reasons:
It might not fit in your RAM (usually GPU, as for neural networks you get a huge boost when you use the GPU).
It is actually not necessary to use all examples.
In machine learning problems, you usually have several thousands of training examples. But the error surface might look similar when you only look at a few (e.g. 64, 128 or 256) examples.
Think of it as a photo: To get an idea of what the photo is about, you usually don't need a 2500x1800px resolution. A 256x256px image will give you a good idea what the photo is about. However, you miss details.
So imagine gradient descent as a walk on the error surface: you start at one point and you want to find the lowest point. To do so, you walk down. Then you check your height again, check in which direction it goes down, and make a "step" (whose size is determined by the learning rate and a couple of other factors) in that direction. When you do mini-batch training instead of batch training, you walk down on a different, low-resolution error surface. A step might actually go up on the "real" error surface. But overall, you will go in the right direction. And you can make single steps much faster!
Now, what happens when you make the resolution lower (the batch size smaller)?
Right, your image of what the error surface looks like gets less accurate. How much this affects you depends on factors like:
Your hardware/implementation
Dataset: How complex is the error surface, and how well is it approximated by only a small portion of it?
Learning: How exactly are you learning (momentum? newbob? rprop?)
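To make the "matrix techniques" point from the quoted text concrete, here is a rough sketch in plain NumPy (a toy linear model with made-up sizes, not the asker's CNN) comparing a per-example loop with a single vectorized mini-batch gradient:

    import time
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 1000))   # a mini-batch of 100 examples, 1000 features
    y = rng.normal(size=100)
    w = np.zeros(1000)

    t0 = time.perf_counter()
    grad_loop = sum((X[i] @ w - y[i]) * X[i] for i in range(len(X))) / len(X)
    t1 = time.perf_counter()
    grad_vec = X.T @ (X @ w - y) / len(X)   # same gradient, computed in one matrix product
    t2 = time.perf_counter()

    print("loop:", t1 - t0, "vectorized:", t2 - t1)
    print("same result:", np.allclose(grad_loop, grad_vec))

Whether a bigger batch actually reduces wall-clock time per epoch therefore depends on how much of this vectorization your hardware and library can exploit; on a CPU that is already saturated, larger batches may not help at all, which matches what the asker observed.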
I'd like to add to what has already been said here that a larger batch size is not always good for generalization. I have seen such cases myself, where an increase in batch size hurt validation accuracy, in particular for a CNN trained on the CIFAR-10 dataset.
From "On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima":
The stochastic gradient descent (SGD) method and its variants are
algorithms of choice for many Deep Learning tasks. These methods
operate in a small-batch regime wherein a fraction of the training
data, say 32–512 data points, is sampled to compute an approximation
to the gradient. It has been observed in practice that when using a
larger batch there is a degradation in the quality of the model, as
measured by its ability to generalize. We investigate the cause for
this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods
tend to converge to sharp minimizers of the training and testing
functions—and as is well known, sharp minima lead to poorer
generalization. In contrast, small-batch methods consistently converge
to flat minimizers, and our experiments support a commonly held view
that this is due to the inherent noise in the gradient estimation. We
discuss several strategies to attempt to help large-batch methods
eliminate this generalization gap.
Bottom-line: you should tune the batch size, just like any other hyperparameter, to find an optimal value.
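A hedged sketch of what that tuning loop could look like; the MLPClassifier and synthetic data below are only stand-ins for the asker's CNN and CIFAR-10:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=5000, n_features=30, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

    # Try a few batch sizes and compare validation accuracy (and, if relevant, wall-clock time).
    # max_iter is kept small so the sketch runs quickly; expect a ConvergenceWarning.
    for batch_size in (32, 64, 128, 256):
        clf = MLPClassifier(hidden_layer_sizes=(64,), batch_size=batch_size,
                            max_iter=30, random_state=0)
        clf.fit(X_train, y_train)
        print(batch_size, round(clf.score(X_val, y_val), 3))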
The 2018 opinion retweeted by Yann LeCun is the paper Revisiting Small Batch Training for Deep Neural Networks by Dominic Masters and Carlo Luschi, suggesting that a good generic maximum batch size is:
32
with some interplay with the choice of learning rate.
The earlier 2016 paper On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima gives some reasons for not using big batches which, paraphrasing badly, amount to this: big batches are likely to get stuck in local ("sharp") minima, while small batches are not.

Neural network and batch learning

I am new to neural networks and would like to find out when I am supposed to reduce the learning rate as opposed to the batch size.
I would understand that if the learning diverges, the learning rate would have to be reduced.
However, when do I reduce or increase the batch size? My guess is that if the loss fluctuates too much, it would be ideal to reduce the batch size?
If you increase the batch size, the gradient is more likely to point towards the right direction so that the (overall) error decreases. Especially compared to updating the weights after considering only a single example which results in a very random and noisy gradient.
Therefore, if the loss function fluctuates, you can do both: increase the batch size and decrease the learning rate. The drawback of a larger batch size is the higher computational cost per update. So if the training takes too long, see if it still converges with a smaller batch size.
You can read more here or here. (Btw, https://stats.stackexchange.com/ is more suitable for theoretical questions which do not contain specific code implementations)
The "correct" way to learn is to use all of your training data for every single step in your gradient descent. However, this takes bit of time to compute it as this is a really heavy function which is - most of the time - parameterized by the thousands of training examples.
The idea is that the error function / weight update looks similar enough when you leave a couple of your training examples out. This speeds the calculateion of the error function up. However, the drawback is that you might not go in the correct direction with some gradient descent steps. But it should be "almost" correct in most cases.
So the rationale is that even if you don't go completely in the correct direction, you can do a lot more steps in the same time so that it doesn't matter.
A very common choice for the mini-batch size is 128 or 256. The most extreme choice is usually called "stochastic gradient descent" and uses only 1 training example.
As so often in ML, it is a good idea to just try different values.
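To make the batch-size knob concrete, here is a minimal mini-batch SGD loop on a toy linear regression problem (everything below is illustrative; batch_size=1 is the "stochastic gradient descent" extreme mentioned above):

    import numpy as np

    def train(X, y, batch_size, learning_rate=0.05, epochs=50, seed=0):
        rng = np.random.default_rng(seed)
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            order = rng.permutation(len(X))          # shuffle once per epoch
            for start in range(0, len(X), batch_size):
                idx = order[start:start + batch_size]
                grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)   # mini-batch gradient
                w -= learning_rate * grad
        return w

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 5))
    true_w = np.arange(1.0, 6.0)
    y = X @ true_w + 0.1 * rng.normal(size=1000)

    for batch_size in (1, 128, 256):
        print(batch_size, np.round(train(X, y, batch_size), 2))

With batch_size=1 the estimates fluctuate more from update to update; with 128 or 256 each step is smoother but costs more to compute, which is exactly the trade-off described above.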

Learning rate of a Q learning agent

The question is how the learning rate influences the convergence rate and convergence itself. If the learning rate is constant, will the Q function converge to the optimum, or does the learning rate necessarily have to decay to guarantee convergence?
Learning rate tells the magnitude of step that is taken towards the solution.
It should not be too big a number, as the updates may continuously oscillate around the minimum, and it should not be too small a number, or else it will take a lot of time and iterations to reach the minimum.
The reason why decay is advised in learning rate is because initially when we are at a totally random point in solution space we need to take big leaps towards the solution and later when we come close to it, we make small jumps and hence small improvements to finally reach the minima.
An analogy can be made with the game of golf: when the ball is far away from the hole, the player hits it very hard to get as close as possible to the hole. Later, when he reaches the flagged area, he chooses a different club to get an accurate short shot.
So it's not that he won't be able to put the ball in the hole without choosing the short-shot club; he may just send the ball past the target two or three times. But it would be best if he plays optimally and uses the right amount of power to reach the hole. The same goes for a decayed learning rate.
The learning rate must decay but not too fast.
The conditions for convergence are the following (sorry, no latex):
sum(alpha(t), 1, inf) = inf
sum(alpha(t)^2, 1, inf) < inf
Something like alpha = k/(k+t) can work well.
This paper discusses exactly this topic:
http://www.jmlr.org/papers/volume5/evendar03a/evendar03a.pdf
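A tiny sketch of the alpha = k / (k + t) schedule mentioned above (k = 100 is an arbitrary illustrative choice, and the Q-learning update shown in the comment is only a placeholder, not code from the question):

    def alpha(t, k=100):
        """Learning rate at step t; satisfies sum(alpha) = inf and sum(alpha^2) < inf."""
        return k / (k + t)

    # Inside a tabular Q-learning loop this would be used roughly as:
    #   q[s, a] += alpha(t) * (r + gamma * q[s_next].max() - q[s, a])
    for t in (0, 10, 100, 1000, 10000):
        print(t, alpha(t))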
It should decay; otherwise there will be fluctuations that keep provoking small changes in the policy.
