Deep learning has been seen as a rebranding of Neural Networks.
Were the issues presented in the paper "Neural Networks and the Bias/Variance Dilemma" by Stuart Geman ever resolved in the architectures in use today?
We have learned a lot about NNs, in particular:
we now learn better representations thanks to progress in unsupervised/autoregressive learning, such as restricted Boltzmann machines, autoencoders, denoising autoencoders, and variational autoencoders, which help us stabilize the training process and start from reasonable representations
we have better priors - not necessarily in the strict probabilistic sense, but we know, for example, that in image processing a good architecture is the convolutional one, so we get models that are smaller (in terms of parameters) but better suited to the problem. Consequently, we are less prone to overfitting (a parameter-count sketch follows this list).
we have better optimization techniques and activation functions - which help us with underfitting (we can train larger networks), in particular we can train deeper networks. Why is deep often better than wide? Because, again, this is another prior: the assumption that the representation should be hierarchical, and it seems to be a valid prior for many modern problems (even if not all of them).
dropout and other techniques brought us better regularization methods (than the previously known and used simple weight priors) - which again limits the problem with overfitting (variance).
There are many more things that changed, but in general we were simply able to find better architectures and better assumptions, so we now search in a narrower class of hypotheses. Consequently, we overfit less (variance) and underfit less (bias) - yet there is still a lot to be done!
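As a rough illustration of the "better priors" point above: a convolutional layer encodes the assumption of local, translation-invariant structure, so it needs far fewer parameters than a dense layer on the same input. A minimal sketch (the sizes are arbitrary, chosen only for the comparison):

```python
# Parameter-count comparison: dense vs. convolutional layer on a
# 32x32 RGB input (sizes are arbitrary, for illustration only).
h, w, c = 32, 32, 3      # input height, width, channels
n_out = 64               # output units / feature maps
k = 3                    # conv kernel size

dense_params = (h * w * c) * n_out + n_out   # full weight matrix + biases
conv_params = (k * k * c) * n_out + n_out    # shared kernels + biases

print(f"dense layer: {dense_params:,} parameters")  # 196,672
print(f"conv layer:  {conv_params:,} parameters")   # 1,792
```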
The next thing is, as #david pointed out, the amount of data. We have huge datasets now; we often have access to more data than we can process in a reasonable time, and obviously more data means less variance - even highly overfitting models start to behave well.
Last, but not least - hardware. This is something that every single deep learning expert will tell you - our computers got stronger. We still use the same algorithms, the same architectures (with many little tweaks, but the core is the same), but our hardware is exponentially faster, and this changes a lot.
#lejlot gave a good overview. I want to point to two specific parts of the whole process.
First, neural networks are universal approximators. That means their bias can, in principle, be made arbitrarily small. The problem thought to be severe was rather overfitting -- too large a variance.
Now, a common and successful way in Machine Learning to deal with too large a variance is to "average it away" over many different predictions -- which should be as uncorrelated as possible. This worked in Random Forests, for instance, and this is how I tend to understand current Neural Networks as well (particularly the maxout+dropout stuff). Of course, this is a narrow view -- there is also all the representation learning, the not-explaining-away property, etc. -- but it's one I find suitable for your question regarding the bias/variance tradeoff.
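To make the "averaging" view concrete: one way to read a dropout network as an ensemble is to keep dropout active at prediction time and average many stochastic forward passes. A minimal PyTorch sketch (the architecture here is made up for illustration):

```python
import torch
import torch.nn as nn

# A small network with dropout; the architecture is hypothetical.
net = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 1),
)
x = torch.randn(1, 10)

# Standard inference: dropout disabled (activations rescaled).
net.eval()
deterministic = net(x)

# "Averaging it away": keep dropout active and average many
# stochastic forward passes -- each pass is one ensemble member.
net.train()
with torch.no_grad():
    samples = torch.stack([net(x) for _ in range(100)])
mean_prediction = samples.mean(dim=0)
```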
Second point: there is no better way to prevent overfitting than having a lot of data. And currently we're in a situation where we can gather a lot of data.
I'm running a SAC reinforcement learner for a robotics application with some pretty decent results. One of the reasons I opted for reinforcement learning is the ability to learn in the field, e.g. to adjust to a mechanical change, such as worn tires or a wheel going a little out of alignment.
My reinforcement learner restores its last saved weights and replay buffer upon startup, so it doesn't need to retrain every time I turn it on. However, one concern I have is with respect to the optimizer.
Optimizers have come a long way since Adam, but everything I read and all the RL code samples I see still seem to use Adam with a fixed learning rate. I'd like to take advantage of some of the advances in optimizers, e.g. one-cycle AdamW. However, a one-cycle optimizer seems inappropriate for a continuous real-world reinforcement learning problem: I imagine it's pretty good for the initial training/calibration, but I expect the low final learning rate would react too slowly to mechanical changes.
One thought I had was perhaps to do a one-cycle approach for initial training, and trigger a smaller one-cycle restart if a change in error indicates something has changed (perhaps the size of the restart could be based on the size of the change in error).
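For concreteness, here is the kind of initial-training setup I have in mind, using PyTorch's built-in one-cycle scheduler (the network and loss are dummy stand-ins for the real SAC update, and the step budget is a guess):

```python
import torch

actor = torch.nn.Linear(8, 2)  # placeholder for the SAC actor
opt = torch.optim.AdamW(actor.parameters(), lr=1e-4, weight_decay=1e-2)

# One cycle over an assumed initial-training budget of 100k updates.
sched = torch.optim.lr_scheduler.OneCycleLR(
    opt, max_lr=3e-4, total_steps=100_000)

for step in range(100_000):
    loss = actor(torch.randn(32, 8)).pow(2).mean()  # dummy loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    sched.step()  # advances the one-cycle learning-rate schedule

# A triggered restart would rebuild `sched` here with a smaller max_lr.
```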
Has anyone experimented with optimizers other than ADAM for reinforcement learning or have any suggestions for dealing with this sort of problem?
Reinforcement learning is very different from traditional supervised learning because the training data distribution changes as the policy improves. In optimization terms, the objective function can be said to be non-stationary. For this reason, I suspect your intuition is likely correct -- that a "one-cycle" optimizer would perform poorly after a while in your application.
My question is, what is wrong with Adam? Typically, the choice of optimizer is a minor detail for deep reinforcement learning; other factors like the exploration policy, algorithmic hyperparameters, or network architecture tend to have a much greater impact on performance.
Nevertheless, if you really want to try other optimizers, you could experiment with RMSProp, Adadelta, or Nesterov Momentum. However, my guess is that you will see incremental improvements, if any. Perhaps searching for better hyperparameters to use with Adam would be a more effective use of time.
EDIT: In my original answer, I made the claim that the choice of a particular optimizer is not primarily important for reinforcement learning speed, and neither is generalization. I want to add some discussion that helps illustrate these points.
Consider how most deep policy gradient methods operate: they sample a trajectory of experience from the environment, estimate returns, and then conduct one or more gradient steps to improve the parameterized policy (e.g. a neural network). This process repeats until convergence (to a locally optimal policy).
Why must we continuously sample new experience from the environment? Because our current data can only provide a reasonable first-order approximation within a small trust region around the policy parameters that were used to collect that data. Hence, whenever we update the policy, we need to sample more data.
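A toy version of this loop, just to illustrate the structure (a REINFORCE-style update on a two-armed bandit; nothing here is specific to SAC):

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.8])  # arm 1 pays more on average
logits = np.zeros(2)               # the "policy" is a softmax over these
lr = 0.1

for iteration in range(500):
    # Fresh experience is sampled from the *current* policy each time.
    probs = np.exp(logits) / np.exp(logits).sum()
    action = rng.choice(2, p=probs)
    reward = rng.normal(true_means[action], 0.1)

    # REINFORCE: gradient of log pi(action) w.r.t. the logits.
    grad_logp = -probs
    grad_logp[action] += 1.0
    logits += lr * reward * grad_logp

print(probs)  # probability mass concentrates on the better arm
```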
A good way to visualize this is to consider an MM algorithm. At each iteration, a surrogate objective is constructed based on the data we have now and then maximized. Each time, we will get closer to the true optimum, but the speed at which we approach it is determined only by the number of surrogates we construct -- not by the specific optimizer we use to maximize each surrogate. Adam might maximize each surrogate in fewer gradient steps than, say, RMSProp does, but this does not affect the learning speed of the agent (with respect to environment samples). It just reduces the number of minibatch updates you need to conduct.
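In MM notation: at iterate $\theta_t$ we build a surrogate $g(\cdot \mid \theta_t)$ that lower-bounds the true objective $J$ and touches it at $\theta_t$, then maximize it:

$$g(\theta \mid \theta_t) \le J(\theta) \;\; \forall \theta, \qquad g(\theta_t \mid \theta_t) = J(\theta_t), \qquad \theta_{t+1} = \arg\max_{\theta} \, g(\theta \mid \theta_t).$$

Each new surrogate requires fresh data; the choice of optimizer only changes how many inner minibatch steps the maximization of each $g$ takes.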
SAC is a little more complicated than this, as it learns Q-values in an off-policy manner and conducts updates using experience replay, but the general idea holds. The best attainable policy is subject to whatever the current data in our replay memory are; regardless of the optimizer we use, we will need to sample roughly the same amount of data from the environment to converge to the optimal policy.
So, how do you make a faster (more sample-efficient) policy gradient method? You need to fundamentally change the RL algorithm itself. For example, PPO almost always learns faster than TRPO, because John Schulman and co-authors found a different and empirically better way to generate policy gradient steps.
Finally, notice that there is no notion of generalization here. We have an objective function that we want to optimize, and once we do optimize it, we have solved the task as well as we can. This is why I suspect that the "Adam-generalizes-worse-than-SGD" issue is actually irrelevant for RL.
My initial testing suggests the details of the optimizer and its hyperparameters matter, at least for off-policy techniques. I haven't had the chance to experiment much with PPO or on-policy techniques, so I can't speak for those, unfortunately.
To speak to #Brett_Daley's thoughtful response a bit: the optimizer is certainly one of the less important characteristics. The means of exploration and the use of a good prioritized replay buffer are certainly critical factors, especially with respect to achieving good initial results. However, my testing seems to show that the optimizer becomes important for fine-tuning.
The off-policy methods I have been using have been problematic with fine-grained stability. In other words, the RL finds a mostly correct solution but never really hones in on the perfect one (or, if it does find it briefly, it drifts off). I suspect the optimizer is at least partly to blame.
I did a bit of testing and found that varying the Adam learning rate has an obvious effect. Too high and both the actor and critic bounce around the minimum and never converge on the optimal policy. In my robotics application this looks like the RL consistently making sub-optimal decisions, as though there's a bit of random exploration with every action that always misses the mark a little bit.
OTOH, a lower learning rate tends to get stuck in sub-optimal solutions and is unable to adapt to changes (e.g. slower motor response due to low battery).
I haven't yet run any tests of a single-cycle schedule or AdamW, but I did a very basic test with a two-stage learning rate adjustment for both actor and critic (starting with a high rate and dropping to a low rate), and the result was a clearly more precise solution that converged quickly during the high-learning-rate stage and then honed in better during the low-learning-rate stage.
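A rough sketch of such a two-stage schedule in PyTorch (the drop point and rates are placeholders; MultiStepLR just multiplies the learning rate by gamma at the chosen milestones):

```python
import torch

actor = torch.nn.Linear(8, 2)    # stand-in for the actor network
critic = torch.nn.Linear(10, 1)  # stand-in for the critic network

actor_opt = torch.optim.Adam(actor.parameters(), lr=3e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=3e-4)

# Stage 1: high rate for the first 50k updates; stage 2: 10x lower.
actor_sched = torch.optim.lr_scheduler.MultiStepLR(
    actor_opt, milestones=[50_000], gamma=0.1)
critic_sched = torch.optim.lr_scheduler.MultiStepLR(
    critic_opt, milestones=[50_000], gamma=0.1)

# In the training loop, after each optimizer step:
#   actor_sched.step(); critic_sched.step()
```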
I imagine AdamW's better weight-decay regularization may similarly improve results, by preventing overfitting to individual training batches from pulling the solution away from the optimum.
Based on the improvement I saw, it's probably worth trying single-cycle methods and AdamW for the actor and critic networks to tune the results. I still have some concerns about how the low learning rate at the end of the cycle will adapt to changes in the environment, but a simple solution may be to monitor the loss and restart the learning rate cycle if it drifts too much. In any case, more testing seems warranted.
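For what it's worth, the drift monitor I have in mind is just an exponential moving average of the loss with a jump threshold; the constants here are guesses that would need tuning:

```python
ema = None
alpha = 0.01             # smoothing factor for the moving average
drift_threshold = 2.0    # flag a restart if loss jumps 2x above the EMA

def loss_drifted(loss_value):
    """Track a smoothed loss and flag sudden jumps."""
    global ema
    if ema is None:
        ema = loss_value
        return False
    drifted = loss_value > drift_threshold * ema
    ema = (1 - alpha) * ema + alpha * loss_value
    return drifted

# In the training loop: if loss_drifted(loss.item()), rebuild the
# one-cycle scheduler with a smaller max_lr, as described above.
```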
When dealing with ill-conditioned neural networks, is the current state of the art to use an adaptive learning rate, some very sophisticated algorithm to deal with the problem, or to eliminate the ill conditioning by preprocessing/scaling of the data?
The problem can be illustrated with the simplest of scenarios: one input and one output, where the function to be learned is y = x/1000, so there is a single weight whose value needs to be 0.001. One data point is (0, 0). It turns out to matter a great deal, if you are using gradient descent, whether the second data point is (1000, 1) or (1, 0.001).
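A quick numeric sketch of why: with squared loss $(wx - y)^2$ the gradient with respect to $w$ is $2x(wx - y)$, so the input scale enters the dynamics twice (the (0, 0) point contributes zero gradient, so only the second point matters):

```python
# Gradient descent on a single weight w with loss (w*x - y)**2.
# The target function is y = x/1000, so the optimal w is 0.001.
def run_gd(x, y, lr, steps=1000):
    w = 0.0
    for _ in range(steps):
        grad = 2 * x * (w * x - y)
        w -= lr * grad
    return w

# Data point (1000, 1): curvature is 2*x**2 = 2e6, so any learning
# rate above 1/x**2 = 1e-6 overshoots and diverges.
print(run_gd(1000, 1, lr=1e-7))    # converges to ~0.001
# run_gd(1000, 1, lr=1e-5)         # diverges

# Data point (1, 0.001): stable for a wide range of rates, but a
# rate small enough for the first case barely moves at all here.
print(run_gd(1, 0.001, lr=1e-7))   # ~2e-7, far from 0.001
print(run_gd(1, 0.001, lr=0.5))    # converges immediately
```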
Theoretical discussion of the problem, with expanded examples.
Example in TensorFlow
Of course, straight gradient descent is not the only available algorithm. Other possibilities are discussed here - however, as that article observes, the alternative algorithms it lists that are good at handling ill-conditioning are not so good when it comes to handling a large number of weights.
Are new algorithms available? Yes, but they aren't clearly advertised as solutions for this problem and are perhaps intended to solve a different set of problems; swapping in Adagrad in place of GradientDescent does prevent overshoot, but it still converges very slowly.
At one time, there were some efforts to develop heuristics to adaptively tweak the learning rate, but then the learning rate hyperparameter, instead of being just a number, becomes a function, which is much harder to get right.
So, these days, is the state of the art to use a more sophisticated algorithm to deal with ill-conditioning, or just to preprocess/scale the data to avoid the problem in the first place?
Till now I was under the impression that machine learning algorithms (GBM, random forest, xgboost, etc.) can handle bad features (variables) present in the data.
In one of my problems, there are around 150 features, and with xgboost I get a logloss of around 1 if I use all of them. But if I remove around 10 bad features (found using some technique), I observe a logloss of 0.45. That is a huge improvement.
My question is, can bad features really make such a big difference?
No algorithm will be able to deal perfectly with a bad data representation; some are better in this regard (like deep learning) and some are worse, but all ML will suffer from a bad data representation. This is one of the reasons for modern deep learning and the assumption that we should work directly on the raw data instead of hand-crafted features (which can be both great and very, very misleading).
No -
You are doing something wrong. Most likely the data you are evaluating is statistically different from your training data.
If the features you are talking about are not predictive w.r.t. the training data, they will be ignored by xgboost, so removing them won't impact anything. (Linear models don't have this luxury.)
Put up some reproducible code and we can dig deeper.
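In that spirit, a minimal reproducible sketch (synthetic data; the assumption to check is whether your 10 "bad" features really behave like pure noise):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# 20 features with signal, plus 10 pure-noise columns appended.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
rng = np.random.default_rng(0)
X_noisy = np.hstack([X, rng.normal(size=(5000, 10))])

# If non-predictive features are largely ignored by the trees,
# the two loglosses should come out close to each other.
for name, data in [("clean", X), ("with noise", X_noisy)]:
    X_tr, X_te, y_tr, y_te = train_test_split(data, y, random_state=0)
    model = XGBClassifier(n_estimators=200).fit(X_tr, y_tr)
    print(name, log_loss(y_te, model.predict_proba(X_te)))
```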
Deep learning has been a revolution recently, and its success is related to the huge amount of data that we can currently manage and the widespread adoption of GPUs.
So here is the problem I'm facing. I know that deep neural nets have the best performance, there is no doubt about it. However, they perform well when the number of training examples is huge. If the number of training examples is low, it is better to use an SVM or decision trees.
But what is huge? What is low? In this paper on face recognition (FaceNet by Google), they show the performance vs. the FLOPs (which can be related to the number of training examples).
They used between 100M and 200M training examples, which is huge.
My question is:
Is there any method to predict in advance the number of training examples I need to get good performance in deep learning? The reason I ask is that it is a waste of time to manually label a dataset if the performance is not going to be good.
The short answer is no. You do not have this kind of knowledge, and furthermore you never will. Problems of this kind are impossible to solve, ever.
What you can have are just some general heuristics/empirical knowledge, which will tell you whether it is probable that DL will not work well (it is possible to predict that the method will fail, while it is nearly impossible to predict success), nothing more. In current research, DL rarely works well for datasets smaller than hundreds of thousands/millions of samples (I do not count MNIST, because everything works well on MNIST). Furthermore, DL is currently heavily studied in just two types of problems - NLP and image processing - so you cannot really extrapolate it to any other kind of problem (no free lunch theorem).
Update
Just to make it a bit clearer: what you are asking for is a way to predict whether a given estimator (or set of estimators) will yield good results given a particular training set. In fact, you even restrict yourself just to the sample size.
The simplest proof (based on your simplification) is as follows: for any N (sample size) I can construct an N-mode (or N^2-mode, to make it even more obvious) distribution which no estimator can reasonably estimate (including a deep neural network), and I can construct trivial data with just one label (so a perfect model requires just one sample). End of proof (there are two different answers for the same N).
Now let us assume that we do have access to the training samples (without labels for now), and not just the sample size. So we are given X (training samples) of size N. Again, I can construct an N-mode labeling yielding a distribution that is impossible to estimate (by anything), and a trivial labeling (just a single label!). Again - two different answers for the exact same input.
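A toy version of this construction, purely to illustrate the "two answers for the same N" point (using a small sklearn MLP as the estimator):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N = 1000
X = rng.normal(size=(N, 10))   # the same training samples both times

# Labeling 1: trivially learnable -- a threshold on one feature.
y_easy = (X[:, 0] > 0).astype(int)

# Labeling 2: independent of X -- no estimator can beat chance
# on held-out data, no matter which model we throw at it.
y_random = rng.integers(0, 2, size=N)

for name, y in [("trivial", y_easy), ("unlearnable", y_random)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = MLPClassifier(max_iter=500).fit(X_tr, y_tr)
    print(name, clf.score(X_te, y_te))  # near 1.0 vs. near 0.5
```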
OK, so maybe given training samples and labels we can predict what will behave well? Now we cannot manipulate the samples or labels to show that there is no such function. So we have to get back to statistics and what we are actually trying to answer. We are asking about the expected value of the loss function over the whole probability distribution which generated our training samples. So now again, the whole "trick" is to see that I can manipulate the underlying distributions (construct many different ones, many of which are impossible to model well by a deep neural network) and still expect that my training samples come from them. This is what statisticians call the problem of having a non-representative sample from a pdf.

In particular, in ML we often relate this problem to the curse of dimensionality. In simple words, in order to estimate the probability well we need an enormous number of samples. Silverman showed that even if you know that your data is just a normal distribution and you ask "what is the density at 0?", you need exponentially many samples (compared to the space dimensionality). In practice our distributions are multi-modal, complex and unknown, so this amount is even higher. We are quite safe to say that, given the number of samples we could ever gather, we cannot ever reasonably estimate distributions with more than 10 dimensions. Consequently, whatever we do to minimize the expected error, we are just using heuristics, which connect the empirical error (fitting to the data) with some kind of regularization (removing overfitting, usually by putting some prior assumptions on families of distributions).

To sum up: we cannot construct a method able to tell whether our model will behave well, because this would require deciding which "complexity" of distribution generated our samples. There will be some simple cases when we can do it - and they will probably say something like "oh! this data is so simple even knn will work well!". You cannot have a generic tool for a DNN or any other (complex) model, though (to be strict, we can have such a predictor for very simple models, because they are so limited that we can easily check whether your data follows this extreme simplicity or not).
Consequently, this boils down to nearly the same question - actually building a model... so you will need to try and validate your approach (that is, train a DNN to answer whether a DNN works well). You can use cross-validation, bootstrapping or anything else here, but all of them essentially do the same thing: build multiple models of your desired type and validate them.
To sum up
I do not claim that we will never have good heuristics; heuristics drive many parts of ML quite well. I only answer whether there is a method able to answer your question - and there is no such thing, and there cannot be. There can be many rules of thumb which for some problems (classes of problems) will work well. And we already have some:
for NLP/2D images you should have at least ~100,000 samples to work with a DNN
having lots of unlabeled instances can partially substitute for the above number (so you can have, say, 30,000 labeled ones + 70,000 unlabeled) with pretty reasonable results
Furthermore, this does not mean that given this size of data a DNN will be better than a kernelized SVM or even a linear model. This is exactly what I was referring to earlier - you can easily construct counterexamples of distributions where an SVM will work the same or even better, despite the number of samples. The same applies to any other technique.
Yet still, even if you are only interested in whether a DNN will work well (and not whether it beats the others), these are just empirical, trivial heuristics, based on at most 10 (!) types of problems. It could be very harmful to treat them as rules or methods. They are just rough first intuitions, gained through the extremely unstructured, random research that happened in the last decade.
OK, so I am lost now... when should I use DL? And the answer is extremely simple:
Use deep learning only if:
You already tested "shallow" techniques and they do not work well
You have large amounts of data
You have huge computational resources
You have experience with neural networks (these are very tricky and unforgiving models, really)
You have a great amount of time to spare, even if the effect is just a few % better results.
Can anybody tell me how many layers a Deep Neural Network "usually" has? How deep is deep enough?
To my knowledge, it is still difficult to give a specific number of hidden layers. But can anyone tell me, through some examples, how many hidden layers researchers and developers use in their deep learning projects?
Many thanks.
It totally depends on the problem you are trying to model. The more layers you have, the harder the network is to train (more computation power is needed). The deeper the network is, however, the more complex the problems it can solve.
Geoffrey Hinton wrote in his tutorial:
How many lines of code should an AI program use and how long should each line be? - This is obviously a silly question.
- Deep belief nets give the creator a lot of freedom. How best to make use of that freedom depends on the task. With enough narrow layers we can model any distribution over binary vectors (Sutskever & Hinton, 2007).
- If freedom scares you, stick to convex optimization of shallow models that are obviously inadequate for doing Artificial Intelligence.
From what I know, the number of layers is usually not very big. In the ImageNet Large-Scale Visual Recognition Challenge 2014, for example, the Google team used a net with 22 layers (GoogLeNet).