How can a small amount of data lead to overfitting? - machine-learning

I am studying the Machine Learning course by Andrew Ng, and in it he says that a larger number of features and a smaller amount of data can lead to overfitting. Can someone elaborate on this?

In general, the less data you have, the more easily your model can memorize the exceptions in your training set, which leads to high accuracy on the training set but low accuracy on the test set, since the model fails to generalize beyond the small training set it has learned from.
For example, consider a Bayesian classifier. We want to predict the math grades of students based on
their grade in science
their last year's math grade
their height
As we know, the last feature is probably irrelevant. Provided we have enough data, our model will learn that this feature is irrelevant, since there will be people of different heights getting different grades if the dataset is big enough.
Now consider a very small dataset (e.g. only one class of students). In this case it is very unlikely that the students' grades are completely uncorrelated with their heights (e.g. the taller students may happen to score above or below average). So our model will make use of that feature. The problem is that our model has learned a correlation between grade and height that does not exist outside the training dataset.
It could also go the other way: our model might learn that everyone who got a good grade last semester will get a good grade this semester (since that might hold in a small dataset) and not use the other features at all.
A more general reason, as I mentioned earlier, is that the model can memorize the dataset. There are always outlier samples, which can't be classified easily. When the dataset is small, the model can find a way to fit these outliers since there are only a few of them. However, it will not be able to predict the real outliers in the test set.
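To make this concrete, here is a minimal sketch (not from the course or the original answer) using scikit-learn: a tiny synthetic dataset with an irrelevant "height" feature and an unconstrained decision tree typically reaches near-perfect training accuracy while scoring noticeably worse on the held-out half. All names and numbers below are made up for illustration.

```python
# Sketch: few samples + an irrelevant feature lets a flexible model memorise the training set.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 30                                   # deliberately tiny dataset
science = rng.normal(70, 10, n)          # relevant feature
last_math = rng.normal(70, 10, n)        # relevant feature
height = rng.normal(170, 10, n)          # irrelevant feature
X = np.column_stack([science, last_math, height])
y = (0.5 * science + 0.5 * last_math + rng.normal(0, 5, n) > 70).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)  # unconstrained tree

print("train accuracy:", accuracy_score(y_tr, model.predict(X_tr)))  # typically 1.0
print("test accuracy: ", accuracy_score(y_te, model.predict(X_te)))  # typically lower
```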

Related

Why do too many epochs cause overfitting?

I am reading the Deep Learning with Python book.
After reading chapter 4, Fighting Overfitting, I have two questions.
Why might increasing the number of epochs cause overfitting?
I know increasing the number of epochs will involve more gradient descent steps; will this cause overfitting?
During the process of fighting overfitting, will the accuracy be reduced?
I'm not sure which book you are reading, so some background information may help before I answer the questions specifically.
Firstly, increasing the number of epochs won't necessarily cause overfitting, but it certainly can do. If the learning rate and model parameters are small, it may take many epochs to cause measurable overfitting. That said, it is common for more training to do so.
To keep the question in perspective, it's important to remember that we most commonly use neural networks to build models we can use for prediction (e.g. predicting whether an image contains a particular object or what the value of a variable will be in the next time step).
We build the model by iteratively adjusting weights and biases so that the network can act as a function to translate between input data and predicted outputs. We turn to such models for a number of reasons, often because we just don't know what the function is/should be or the function is too complex to develop analytically. In order for the network to be able to model such complex functions, it must be capable of being highly-complex itself. Whilst this complexity is powerful, it is dangerous! The model can become so complex that it can effectively remember the training data very precisely but then fail to act as an effective, general function that works for data outside of the training set. I.e. it can overfit.
You can think of it as being a bit like someone (the model) who learns to bake by only baking fruit cake (training data) over and over again – soon they'll be able to bake an excellent fruit cake without using a recipe (training), but they probably won't be able to bake a sponge cake (unseen data) very well.
Back to neural networks! Because the risk of overfitting is high with a neural network there are many tools and tricks available to the deep learning engineer to prevent overfitting, such as the use of dropout. These tools and tricks are collectively known as 'regularisation'.
This is why we use development and training strategies involving test datasets – we pretend that the test data is unseen and monitor it during training. You can see an example of this in a typical plot of training and test error against epochs: after about 50 epochs the test error begins to increase as the model has started to 'memorise the training set', despite the training error remaining at its minimum value (often training error will continue to improve).
So, to answer your questions:
Allowing the model to continue training (i.e. more epochs) increases the risk of the weights and biases being tuned to such an extent that the model performs poorly on unseen (or test/validation) data. The model is now just 'memorising the training set'.
Continued epochs may well increase training accuracy, but this doesn't necessarily mean the model's predictions from new data will be accurate – often it actually gets worse. To prevent this, we use a test data set and monitor the test accuracy during training. This allows us to make a more informed decision on whether the model is becoming more accurate for unseen data.
We can use a technique called early stopping, whereby we stop training the model once test accuracy has stopped improving after a small number of epochs. Early stopping can be thought of as another regularisation technique.
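As a hedged sketch of how early stopping is commonly set up in Keras (the library used in Deep Learning with Python): the architecture, synthetic data and patience value below are placeholders rather than anything from the book or the question.

```python
# Sketch: early stopping in Keras; model, data shapes and patience are arbitrary placeholders.
import numpy as np
from tensorflow import keras

# Dummy data so the example is self-contained.
x_train = np.random.rand(1000, 20)
y_train = (x_train.sum(axis=1) > 10).astype(int)
x_val = np.random.rand(200, 20)
y_val = (x_val.sum(axis=1) > 10).astype(int)

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"])

early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss",          # watch performance on held-out data
    patience=5,                  # stop after 5 epochs without improvement
    restore_best_weights=True,   # roll back to the best epoch seen
)

history = model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),
    epochs=200,                  # an upper bound; training usually stops earlier
    callbacks=[early_stop],
)
```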
More descent steps (a large number of epochs) can, ideally, take you very close to the global minimum of the loss function. Now, since we don't know anything about the test data, fitting the model so precisely to the class labels of the training data may cause the model to lose its generalization capability (i.e. increase its error on unseen data). There is no doubt that we want to learn the input-output relationship from the training data, but we must not forget that the end goal is for the model to perform well on unseen data. So it is a good idea to stay close, but not very close, to the global minimum.
But still, we can ask: what if I do reach the global minimum? What can be the problem with that, and why would it cause the model to perform badly on unseen data?
The answer can be that, in order to reach the global minimum, we would be trying to fit the maximum amount of training data, and this will result in a very complex model (since it is less probable that the particular training data we happen to have follows a simple spatial distribution). But what we can assume is that a large amount of unseen data (say, for facial recognition) will have a simpler spatial distribution and will need a simpler model for better classification (I mean that the entire world of unseen data will have a pattern that we can't observe, simply because we only have access to a small fraction of it in the form of training data).
If you incrementally observe points from a distribution (say 50, 100, 500, 1000, ...), the structure of the data will look complex until you have observed a sufficiently large number of points (at the limit, the entire distribution), but once you have observed enough points you can expect to see the simpler pattern present in the data, which can be easily classified.
In short, a small fraction of the training data will tend to have a more complex structure than the entire dataset, and overfitting to the training data may cause our model to perform worse on the test data.
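A quick way to see this effect (a sketch, not part of the original answer) is a learning curve with scikit-learn: as the number of training samples grows, the gap between training and validation scores usually shrinks. The dataset and model below are synthetic placeholders.

```python
# Sketch: train/validation gap vs. training-set size on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import learning_curve
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5, random_state=0)
sizes, train_scores, val_scores = learning_curve(
    DecisionTreeClassifier(max_depth=8, random_state=0), X, y,
    train_sizes=[50, 100, 500, 1000, 1500], cv=5,
)
for n, tr, va in zip(sizes, train_scores.mean(axis=1), val_scores.mean(axis=1)):
    print(f"n={n:5d}  train={tr:.2f}  validation={va:.2f}  gap={tr - va:.2f}")
```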
An analogous example of the above phenomenon from day-to-day life is as follows:
Say we have met N people to date in our lifetime. While meeting them we naturally learn from them (we become what we are surrounded with). Now, if we are heavily influenced by each individual and try to tune ourselves very closely to the behaviour of all the people we have met, we develop a personality that closely resembles them, but on the other hand we start judging every individual who is unlike the people we have already met. Becoming judgemental takes a toll on our ability to tune in with new groups, since we trained very hard to minimize the differences with the people we have already met (the training data). This, in my view, is a good example of overfitting and the loss of generalization capability.

What to do with corrected wrongly classified random forest predictions?

I have trained a multi-class Random Forest model. Now, if the model predicts something wrong, we manually correct it. The question is: what can we do with that corrected label to make future predictions better?
Thoughts:
We can't retrain the model again and again (it was trained on 0.7 million rows, so it might treat the new data as noise).
We cannot train small RF models on the corrections, as they will also create a mess.
Random Forest works better than a neural network here, so we are not thinking of going that way.
What do you mean by "manually correct"? There may be various different points in the decision trees that were executed leading to a wrong prediction, not to mention the numerous decision trees used to get your final prediction.
I think there is some misunderstanding in your first point. Unless the distribution is non-stationary (in which case your trained model is of diminished value to begin with), the new data is treated as "noise" only in the sense that including it in the final model is unlikely to change future predictions all that much. As far as I can tell, this is how it should be, absent other factors like a changing distribution. That is, if the future data you want to predict will look a lot more like the data you failed to predict correctly, then you would indeed want to upweight the importance of classifying this sample in your new model.
Anyway, it sounds like you're describing an online learning problem (you want a model that updates itself in response to streaming data). You can find some general ideas just by searching for online random forests, for example:
[Online random forests](http://www.ymer.org/amir/research/online-random-forests/) and [online multiclass lpboost](https://github.com/amirsaffari/online-multiclass-lpboost) describe a general framework akin to what you may have in mind: the input to the model is a stream of new observations; the forest learns on this new data by dropping those trees which perform poorly and eventually growing new trees that include the new data.
The general idea described here is used in a number of boosting algorithms. For example, AdaBoost aggregates an ensemble of "weak learners" (for example, individual decision trees grown on different, incomplete subsets of the data) into a better whole by training subsequent weak learners specifically on formerly misclassified instances. The idea here is that the instances where your current model is wrong are the most informative for future performance improvements.
I don't know the specific details of how the linked implementations accomplish this, though the idea is in line with what you might expect.
You might try these, or other such algorithms you find from searching around.
That all said, I suspect something like the online random forest algorithm is mostly useful when old data becomes obsolete over time. If it doesn't -- i.e. if your future data and early data are pulled from the same distribution -- it's not obvious to me that periodically retraining your model (by which I mean the random forest itself plus any cross-validation / model-selection procedures you might use to transform forest predictions into a final assignment) on the whole batch of examples you have is a bad idea, unless the data lives in a very high-dimensional feature space or arrives very quickly.
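If periodic full retraining is acceptable, one simple pattern (a sketch under that assumption, not a method from the linked papers) is to buffer the manually corrected examples and refit the forest on the original data plus the buffer once enough corrections accumulate, optionally upweighting the corrected rows so they are not drowned out. The dataset sizes and threshold below are placeholders.

```python
# Sketch: periodically refit a random forest on original data + manually corrected rows.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_base = rng.normal(size=(10_000, 8))                 # stands in for the 0.7M-row dataset
y_base = (X_base[:, 0] + X_base[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_base, y_base)

corrected_X, corrected_y = [], []

def record_correction(x_row, true_label, retrain_every=500):
    """Store a manually corrected example; refit once enough have accumulated."""
    global model
    corrected_X.append(x_row)
    corrected_y.append(true_label)
    if len(corrected_y) >= retrain_every:
        X_all = np.vstack([X_base, np.array(corrected_X)])
        y_all = np.concatenate([y_base, np.array(corrected_y)])
        # Upweight corrected rows so a few hundred corrections are not drowned out.
        weights = np.concatenate([np.ones(len(y_base)),
                                  np.full(len(corrected_y), 10.0)])
        model = RandomForestClassifier(n_estimators=200, random_state=0)
        model.fit(X_all, y_all, sample_weight=weights)
        corrected_X.clear()
        corrected_y.clear()
```

The upweighting factor is a knob worth validating: too high and the forest chases the corrections; too low and, as noted above, the corrections barely change future predictions.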

Incorporating prior knowledge to machine learning models

Say I have a dataset of students with features such as income level, gender, parents' education levels, school, etc., and the target variable is, say, passing or failing a national exam. We can train a machine learning model to predict, given these values, whether a student is likely to pass or fail (say in sklearn, using predict_proba we can get the probability of passing).
Now say I have a different set of information which has nothing to do with the previous dataset: the schools and the percentage of students from each particular school who passed that national exam last year and in the years before. Say, schoolA: 10%, schoolB: 15%, etc.
How can I use this additional knowledge to improve my model? This data is surely valuable (students from certain schools have a higher chance of passing the exam due to their educational facilities, qualified staff, etc.).
Do I somehow add this information as a new feature to the dataset? If so, what is the recommended way? Or do I use this information after the model prediction and somehow combine the two to get a final probability? Obviously an average or a weighted average doesn't work, because the second dataset has probabilities below 20%, which drags the total probability very low. How do data scientists usually incorporate this kind of prior knowledge? Thank you.
You can try different ways to add this data and see if your model is able to learn from it. Most likely you'll see right away that this additional data just confuses the model, mostly because you're already providing more precise data on each student of the school and the model has more freedom in how to use that information.
But artificial neural network training is all about continual trial and error, so you should definitely try to train it with all the data you can imagine to see if it is able to reach a decent error in the end.
Adding the average pass percentage of each student's school as a new feature is worth trying.
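A hedged sketch of that feature-join approach with pandas; the column names (school, pass_rate_last_year) and the values are made up for illustration:

```python
# Sketch: attach last year's school pass rate to each student row as an extra feature.
import pandas as pd

students = pd.DataFrame({
    "student_id": [1, 2, 3],
    "school": ["schoolA", "schoolB", "schoolA"],
    "income_level": [2, 3, 1],
})
school_stats = pd.DataFrame({
    "school": ["schoolA", "schoolB"],
    "pass_rate_last_year": [0.10, 0.15],   # 10%, 15% from the second dataset
})

# Left join keeps every student; schools missing from the stats table get NaN,
# which can then be imputed (e.g. with the overall mean pass rate).
students = students.merge(school_stats, on="school", how="left")
students["pass_rate_last_year"] = students["pass_rate_last_year"].fillna(
    school_stats["pass_rate_last_year"].mean()
)
print(students)
```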

Assistance regarding model choice

I'm new to and investigating Machine Learning. I have a use case and data, but I am unsure of a few things, mainly how my model will run and what model to start with. Details of the use case and questions are below. Any advice is appreciated.
My Main question is:
When basing a result on scores that are accumulated over time, is it possible to design a model to run on a continuous basis so it gives a best guess at all times, be it run on day one or 3 months into the semester?
What model should I start with? I was thinking a classifier, but ranking might be interesting also.
Use Case Details
Apprentices take a semesterized course, 4 semesters long, each 6 months in duration. Over the course of a semester, apprentices perform various operations and processes and are scored on how well they do. After each semester, the apprentices either have a sufficient score to move on to the next semester, or they fail.
We are investigating building a model that will help identify apprentices who are in danger of failing, with enough time for them to receive help.
Each procedure is assigned a complexity code of simple, intermediate or advanced, and is weighted by complexity.
Regarding features, we have the following:
Initial interview scores
Entry Exam Scores
Total number of simple procedures each apprentice performed
Total number of intermediate procedures each apprentice performed
Total number of advanced procedures each apprentice performed
Average score for each complexity level
Demographic information (nationality, age, gender)
What I am unsure of is how the model will work and when we will run it, i.e. if we run it on day one of the semester, I assume everyone will be predicted to fail, as everyone has procedure scores of 0.
Current plan is to run the model 2-3 months into each semester, so there is enough score data & also enough time to help any apprentices who are in danger of failing.
This definitely looks like a classification model problem:
y = f(x[0],x[1], ..., x[N-1])
where y (boolean output) = {pass, fail} and x[i] are different features.
There is a plethora of ML classification models, like Naive Bayes, Neural Networks, Decision Trees, etc., which can be used depending upon the type of the data. In case you are looking for an answer which suggests a particular ML model, I would need more data for that. However, in general, a model-selection flow-chart can be helpful here. You can also read about model selection in the 5th lecture of Andrew Ng's CS229.
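As a hedged illustration of trying a few candidates, here is a sketch with scikit-learn; the feature matrix X and labels y are synthetic stand-ins for the apprentice features and pass/fail labels, not your real data.

```python
# Sketch: compare a few candidate classifiers with cross-validation on stand-in data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

candidates = {
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, clf in candidates.items():
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name:20s} mean accuracy = {scores.mean():.3f}")
```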
Now, coming back to the basic methodology: some of these features, like initial interview scores and entry exam scores, you already know in advance, whereas others, like performance in procedures, become known over the semester.
So, there is no harm in saying that the model will always predict better towards the end of each semester.
However, I can make a few suggestions to make it even better:
Instead of taking the initial procedure-scores as 0, take them as a mean/median of the past performances in other procedures by the subject-apprentice.
You can even build a sub-model to analyze the relation between procedure-scores and interview-scores as they are not completely independent. (I will explain this sentence in the later part of the answer)
However, if the semester is the very first semester for the subject apprentice, then you won't already have such data for that apprentice. In that case, you might need to consider the average performance of other apprentices with profiles similar to the subject apprentice's (see the sketch after this list). If the dataset is not very large, a K Nearest Neighbors approach can be quite useful here; however, for large datasets, KNN suffers from the curse of dimensionality.
Also, plot a graph between y and different variables x[i], so as to see the independent variation of y with respect to each variable.
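For the cold-start suggestion above, one hedged way to estimate a first-semester apprentice's initial procedure score from apprentices with similar profiles is a nearest-neighbours average; the feature names and numbers here are placeholders for illustration.

```python
# Sketch: estimate a new apprentice's initial procedure score from the k most similar
# past apprentices (by interview/entry-exam profile). All values are placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Past apprentices: [interview_score, entry_exam_score] -> average procedure score
profiles = np.array([[72, 65], [85, 80], [60, 55], [90, 88], [70, 75]])
avg_procedure_scores = np.array([68.0, 82.0, 58.0, 90.0, 74.0])

knn = KNeighborsRegressor(n_neighbors=3).fit(profiles, avg_procedure_scores)

new_apprentice = np.array([[78, 70]])        # first-semester apprentice, no procedures yet
initial_estimate = knn.predict(new_apprentice)[0]
print(f"estimated starting procedure score: {initial_estimate:.1f}")
```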
Most probably (although it's just a hypothesis), y will depend more on the initial variables than on the variables obtained later, the reason being that the later variables are not completely independent of the former variables.
My point is, if a model can be created to predict the output of a semester, then, a similar model can be created to predict just the output of the 1st procedure-test.
In the end, as the model might be heavily based on demographic factors and other such things, it might not be a very successful model. For the same reason, we cannot accurately predict election results, soccer match results, etc., as they are heavily dependent upon real-time, dynamic data.
For dynamic predictions based on the different procedure performances, Time Series Analysis can be a bit helpful. But in any case, the final result will be heavily dependent on the apprentice's continued motivation and performance, which will become clearer towards the end of each semester.

Biased initial dataset active learning

Does selecting a biased initial (seed) dataset affect the training and accuracy of a model built using active learning?
It may. Suppose the seed data sample is heavily biased and the model has not seen any examples of a particular cluster. Then, while predicting, the model may predict those points as belonging to some other class, and do so with high certainty (i.e. it has become heavily biased). So it wouldn't feel the need to query labels for such data instances and won't learn them. But when we later test the model's results against the true labels, it will show low accuracy, because these were actually wrong predictions.
Having said that, we also may not want a 'perfectly uniform' distribution of training data in the seed dataset: if it contains a considerable number of outliers, labels that are incorrect due to human error, or a heavily skewed but low-probability data cluster, that would hamper the model.
One solution can be 'active cleaning' of such instances, or otherwise, we can allow seed data to have some amount of intentional bias (which can be towards high-density clusters or influential labels or ensemble disagreements or uncertainty of model). We then make sure to account for this introduced bias in the model in our further decision-making process based on the model's results.
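As a hedged sketch of the kind of loop involved (pool-based active learning with least-confident uncertainty sampling, starting from a deliberately biased seed); the dataset, seed construction and query budget are synthetic placeholders.

```python
# Sketch: pool-based active learning with uncertainty sampling and a biased seed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, n_classes=2, random_state=0)

# Heavily biased seed: 19 examples of class 0 and only 1 of class 1.
class0 = np.where(y == 0)[0][:19]
class1 = np.where(y == 1)[0][:1]
labeled = set(np.concatenate([class0, class1]).tolist())

model = LogisticRegression(max_iter=1000)
for _ in range(30):                                   # query budget of 30 labels
    idx = np.array(sorted(labeled))
    model.fit(X[idx], y[idx])
    pool = np.array([i for i in range(len(y)) if i not in labeled])
    proba = model.predict_proba(X[pool])
    # Least-confident sampling: query the point whose top-class probability is lowest.
    query = pool[np.argmin(proba.max(axis=1))]
    labeled.add(int(query))                           # the oracle supplies y[query]

print("accuracy on the full set:", model.score(X, y))
```

Comparing this run against one started from a more balanced seed is a simple way to see how much the seed bias matters for a given problem.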

Resources