Normally the learning rate is a value that we decide at the beginning, and it does not change with the number of iterations.
But in a SOM the learning rate changes with the iteration. What is the idea behind that?
As I understand it, the learning rate should decrease with the number of iterations. Why is that?
The reason is quite simple. SOMs are ill-defined as mathematical models, and one needs to decrease the learning rate in order to ensure convergence. In other words, if you do not change this value, the learning procedure might not stop at all. This issue is addressed to some extent by more mathematical models called "Principal Curves" and "Principal Manifolds", which are much less popular but introduce a valid mathematical approach for learning SOM-like representations.
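As a concrete illustration, SOM implementations typically shrink the learning rate (and the neighbourhood radius) with an exponential schedule; the function names and constants below are only illustrative, not from any particular library:

import numpy as np

# Illustrative exponentially decaying SOM schedules (constants are arbitrary examples).
def learning_rate(t, lr0=0.5, tau=1000.0):
    return lr0 * np.exp(-t / tau)        # tends to 0, so weight updates eventually vanish

def neighbourhood_radius(t, sigma0=5.0, tau=1000.0):
    return sigma0 * np.exp(-t / tau)     # the neighbourhood shrinks alongside the rate

print(learning_rate(0), learning_rate(5000))   # 0.5 -> ~0.0034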
What could be the reason for this?
There is no guarantee that Bayesian optimization will provide optimal hyperparameter values; quoting from the definitive textbook Deep Learning, by Goodfellow, Bengio, and Courville (page 430):
Currently, we cannot unambiguously recommend Bayesian hyperparameter
optimization as an established tool for achieving better deep learning results or
for obtaining those results with less effort. Bayesian hyperparameter optimization
sometimes performs comparably to human experts, sometimes better, but fails
catastrophically on other problems. It may be worth trying to see if it works on a
particular problem but is not yet sufficiently mature or reliable.
In other words, it is actually just a heuristic (like grid search), and what you report does not necessarily mean that you are doing something wrong or that there is a problem with the procedure to be corrected...
I would like to extend the excellent answer by desertnaut with some intuition about what could go wrong and how one can improve Bayesian optimization. Bayesian optimization usually uses some form of distance (and correlation) computation between points (hyperparameter settings). Unfortunately, it is usually close to impossible to impose such a geometrical structure on the parameter space. One of the important issues connected to this is that the optimization implicitly imposes a Lipschitz or near-linear dependency between the optimized value and the hyperparameters. To understand this in more detail, let us have a look at the
Integer(50, 1000, name="estimators")
parameter. Let us inspect how adding 100 estimators changes the behavior of the optimization problem. If we add 100 estimators to 50, we triple the number of estimators and probably increase the expressive power significantly. However, changing from 900 to 1000 should not be nearly as important. So if the optimization process starts with, say, 600 estimators as a first guess, it will notice that changing the number of estimators by around 50 does not change much, so it will skip optimizing this hyperparameter (as it assumes a quasi continuous-linear dependency). This can seriously harm the exploration process.
In order to overcome this issue, it is better to use some sort of log distribution for this parameter. A similar trick is usually applied, e.g., to the learning_rate parameter.
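For instance, with scikit-optimize (assuming a version whose Integer and Real dimensions accept a log-uniform prior), the search space could be declared roughly like this; the bounds are only illustrative:

from skopt.space import Integer, Real

# A log-uniform prior makes 50 -> 150 and 300 -> 900 "equally far" for the optimizer,
# which matches the intuition above.
search_space = [
    Integer(50, 1000, prior="log-uniform", name="estimators"),
    Real(1e-4, 1e-1, prior="log-uniform", name="learning_rate"),
]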
I am working on a problem that we aim to solve with deep Q-learning. However, the problem is that training takes too long for each episode, roughly 83 hours. We envision solving the problem within, say, 100 episodes.
So we are gradually learning a matrix (100 * 10), and within each episode, we need to perform 100*10 iterations of certain operations. Basically we select a candidate from a pool of 1000 candidates, put this candidate in the matrix, and compute a reward function by feeding the whole matrix as the input:
The central hurdle is that the reward function computation at each step is costly, roughly 2 minutes, and each time we update one entry in the matrix.
All the elements in the matrix depend on each other in the long term, so the whole procedure seems not suitable for some "distributed" system, if I understood correctly.
Could anyone shed some light on how to look at the potential optimization opportunities here, such as some extra engineering effort? Any suggestions and comments would be appreciated very much. Thanks.
======================= update of some definitions =================
0. initial stage:
a 100 * 10 matrix, with every element initially empty
1. action space:
at each step I select one element from a candidate pool of 1000 elements and insert it into the matrix, one element per step.
2. environment:
at each step I have an updated matrix to learn from.
An oracle function F returns a quantitative value in the range 5000 to 30000, the higher the better (one computation of F takes roughly 120 seconds).
This function F takes the matrix as input and performs a very costly computation; it returns a quantitative value indicating the quality of the synthesized matrix so far.
This function essentially measures some performance of the system, so it does take a while to compute a reward value at each step.
3. episode:
Saying "we are envisioning to solve it within 100 episodes" is just an empirical estimate. But it shouldn't take fewer than 100 episodes, at least.
4. constraints
Ideally, as I mentioned, "all the elements in the matrix depend on each other in the long term", and that's why the reward function F computes the reward by taking the whole matrix as input rather than just the latest selected element.
Indeed, by appending more and more elements to the matrix, the reward could increase, but it could also decrease.
5. goal
The synthesized matrix should make the oracle function F return a value greater than 25000. Whenever it reaches this goal, I will terminate the learning process.
Honestly, there is no effective way to know how to optimize this system without knowing specifics such as which computations are in the reward function or which programming design decisions you have made that we can help with.
You are probably right that the episodes are not suitable for distributed calculation, meaning we cannot parallelize this, as they depend on previous search steps. However, it might be possible to throw more computing power at the reward function evaluation, reducing the total time required to run.
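For example, if at a given step several candidate insertions could be scored independently, their oracle evaluations could run in parallel. This is a purely hypothetical sketch: insert and F are placeholders standing in for your own code, which we have not seen, and it assumes the candidates at one step really are independent of each other.

from concurrent.futures import ProcessPoolExecutor
from functools import partial

def score_candidate(matrix, candidate):
    trial = insert(matrix, candidate)   # hypothetical helper returning a new matrix
    return candidate, F(trial)          # F is the costly oracle (~120 s per call)

def best_candidate(matrix, candidates, workers=8):
    # Runs up to `workers` oracle evaluations at once instead of one after another.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        scored = list(pool.map(partial(score_candidate, matrix), candidates))
    return max(scored, key=lambda pair: pair[1])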
I would encourage you to share more details on the problem, for example by profiling the code to see which component takes up most of the time, by sharing a code excerpt or, as the standard for doing science gets higher, by sharing a reproducible code base.
Not a solution to your question, just some general thoughts that may be relevant:
One of the biggest obstacles to applying Reinforcement Learning in "real world" problems is the astoundingly large amount of data/experience required to achieve acceptable results. For example, OpenAI's Dota 2 agent collected the equivalent of 900 years of experience per day. In the original Deep Q-network paper, achieving performance close to that of a typical human required hundreds of millions of game frames, depending on the specific game. In other benchmarks where the inputs are not raw pixels, such as MuJoCo, the situation isn't a lot better. So, if you don't have a simulator that can generate samples (state, action, next state, reward) cheaply, maybe RL is not a good choice. On the other hand, if you have a ground-truth model, maybe other approaches can easily outperform RL, such as Monte Carlo Tree Search (e.g., Deep Learning for Real-Time Atari Game Play Using Offline Monte-Carlo Tree Search Planning) or even simple random search (Simple random search provides a competitive approach to reinforcement learning). All these ideas and many more are discussed in this great blog post.
The previous point is especially true for deep RL. Approximating value functions or policies using a deep neural network with millions of parameters usually implies that you'll need a huge quantity of data, or experience.
And regarding your specific question:
In the comments, I've asked a few questions about the specific features of your problem. I was trying to figure out whether you really need RL to solve the problem, since it's not the easiest technique to apply. On the other hand, if you really need RL, it's not clear whether you should use a deep neural network as the approximator or whether a shallow model (e.g., random trees) would do. However, these questions and other potential optimizations require more domain knowledge. Here, it seems you are not able to share the domain of the problem, which could be due to numerous reasons and I perfectly understand.
You have estimated the number of required episodes based on some empirical studies using a smaller version with a 20*10 matrix. Just a note of caution: due to the curse of dimensionality, the complexity of the problem (or the experience needed) could grow exponentially as the state space dimensionality grows, although maybe that is not your case.
That said, I'm looking forward to seeing an answer that really helps you solve your problem.
I am using the Adam method in Caffe. It has a delta/epsilon tuning parameter (used to avoid division by zero). In Caffe, its default value is 1e-8, and I can change it to, e.g., 1e-6 or 1e-0. From TensorFlow, I hear that this parameter affects the performance of training, especially on limited datasets.
The default value of 1e-8 for epsilon might not be a good default in general. For example, when training an Inception network on ImageNet a current good choice is 1.0 or 0.1.
If anyone has experimented with changing this parameter, could you give me some advice about its effect on performance?
Consider the update equation for Adam: epsilon is to prevent dividing by zero in the case that (the exponentially-decaying average of) the standard deviation of the gradients is zero.
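For reference, here is a plain-NumPy sketch of a single Adam step, mirroring the update in the original Adam paper (the hyperparameter values are just the common defaults):

import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # t is the 1-based step count (needed for the bias correction below)
    m = beta1 * m + (1 - beta1) * grad          # 1st-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # 2nd-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    # eps keeps the denominator away from zero when sqrt(v_hat) is ~0, i.e. when the
    # (decayed) gradient magnitude for a parameter has essentially vanished.
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v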
Why would a low value of epsilon cause problems? Perhaps there are cases where some parameters settle to good values before others and having epsilon too low means those parameters get huge learning rates and get pushed away from those good values. I'd guess this would be more problematic in something like a resnet where a lot of the layers have little effect on a large portion of the examples.
On the other hand, setting epsilon higher both limits the parameter-wise learning rate effect and reduces all the learning rates, slowing down training. It's possible to find examples of higher values of epsilon helping simply because the learning rate was too high to begin with.
Recall that when exponentially decaying the learning rate in TensorFlow one does:
decayed_learning_rate = learning_rate * decay_rate ** (global_step / decay_steps)
The docs describe the staircase option as:
If the argument staircase is True, then global_step / decay_steps is an integer division and the decayed learning rate follows a staircase function.
When is it better to decay the learning rate every X number of steps, following a staircase function, rather than using a smoother version that decays more and more with every step?
The existing answers didn't seem to describe this. There are two different behaviors being described as 'staircase' behavior.
From the feature request for staircase, the behavior is described as being a hand-tuned piecewise constant decay rate, so that a user could provide a set of iteration boundaries and a set of decay rates, to have the decay rate jump to the specified value after the iterations pass a given boundary.
If you look into the actual code for this feature pull request, you'll see that the PR isn't related much to the staircase option in the function arguments. Instead, it defines a wholly separate piecewise_constant operation, and the associated unit test shows how to define your own custom learning rate as a piecewise constant with learning_rate_decay.piecewise_constant.
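A rough usage sketch of that operation (TF 1.x-style API; the boundaries and values below are purely illustrative):

import tensorflow as tf

global_step = tf.train.get_or_create_global_step()
# Rate is 1.0 for steps [0, 100000), 0.5 for [100000, 110000), then 0.1 afterwards.
learning_rate = tf.train.piecewise_constant(
    global_step,
    boundaries=[100000, 110000],
    values=[1.0, 0.5, 0.1])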
From the documentation on decaying the learning rate, the behavior is described as treating global_step / decay_steps as integer division, so for the first set of decay_steps steps, the division results in 0, and the learning rate is constant. Once you cross the decay_steps-th iteration, you get the decay rate raised to a power of 1, then a power of 2, etc. So you only observe decay rates at the particular powers, rather than smoothly varying across all the powers if you treated the global step as a float.
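As a small illustration of that integer-division behavior (a standalone sketch, not TensorFlow's actual implementation):

def decayed_lr(learning_rate, decay_rate, global_step, decay_steps, staircase=False):
    power = global_step // decay_steps if staircase else global_step / decay_steps
    return learning_rate * decay_rate ** power

# With decay_steps=1000, the smooth version has already decayed at step 500,
# while the staircase version keeps the initial rate until step 1000.
print(decayed_lr(0.1, 0.96, 500, 1000))                  # ~0.098
print(decayed_lr(0.1, 0.96, 500, 1000, staircase=True))  # 0.1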
As to advantages, this is just a hyperparameter decision you should make based on your problem. Using the staircase option allows you to hold the decay rate constant for a while, essentially like maintaining a higher temperature in simulated annealing for a longer time. This allows you to explore more of the solution space by taking bigger strides in the gradient direction, at the cost of possibly noisy or unproductive updates. Meanwhile, smoothly increasing the decay-rate power steadily "cools" the exploration, which can limit you by getting you stuck near a local optimum, but it can also prevent you from wasting time on noisily large gradient steps.
Whether one approach or the other is better (a) often doesn't matter very much and (b) usually needs to be specially tuned in the cases when it might matter.
Separately, as the feature request link mentions, the piecewise constant operation seems to be for very specifically tuned use cases, when you have separate evidence in favor of a hand-tuned decay rate based on collecting training metrics as a function of iteration. I would generally not recommend that for general use.
Good question.
As far as I know, it is a preference of the research group.
In the old days, it was computationally more efficient to reduce the learning rate only every epoch. That's why some people still prefer to do it that way nowadays.
Another, hand-wavy, story that people may tell is that it helps escape local optima: by "suddenly" changing the learning rate, the weights might jump to a better basin. (I don't agree with this, but add it for completeness.)
I am working on a basic decision-making algorithm: based on the time of a parallel loop iteration, a decision is made to either increase or decrease the number of threads assigned to a process. My initial approach was to take the average time of ten iterations and compare it to the previous (average) time, every 5 seconds. This approach failed; left by itself, it would always drive the thread count down to 1.
So I've turned to unsupervised learning, using clustering as a way to decide whether a time x should be classified as: increase, stick with, or decrease the number of threads to assign.
Based on the type of data I am classifying, I believe K-means is a good starting point for unsupervised learning. Am I on the right track here?
If you have an objective, use supervised learning.
Unsupervised methods cluster by some objective of their own. You have no control to make k-means cluster points according to your objective (e.g., "increase, stick with, or decrease"). Instead, k-means may yield clusters that have no relationship to it at all!
Try labeling some data (which should be fairly easy in retrospect, i.e., "should I have increased the number of threads at t minus 10?") and then training a classifier on that.
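A minimal sketch of that supervised approach, assuming you have retrospectively labeled iteration timings (the features, labels, and classifier choice below are illustrative placeholders, not a recommendation specific to your system):

from sklearn.ensemble import RandomForestClassifier

# Each row: [average iteration time in seconds, current thread count]
X = [[0.42, 8], [0.55, 8], [0.30, 12], [0.61, 4]]
# Labels assigned in hindsight for each observation
y = ["decrease", "stick", "decrease", "increase"]

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([[0.50, 8]]))   # predicted action for a new timing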