Machine Learning - Overfitting The Dataset On Purpose

I started learning machine learning recently, and as my first project I'm developing a Tic Tac Toe engine that predicts the best move in a given position (or board state). I used brute force to enumerate all possible positions for a 3-by-3 board (excluding finished and repeated games) and got 4520 distinct positions. Then I used minimax to figure out the best move in each of these positions. Now I want to fit a model to this data to achieve maximum accuracy. Here is what I thought of:
Since I have all possible positions, why don't I train the model on the whole set (so there is no test set) and use a complicated neural network to overfit the data and reach 100% accuracy? Then it will also be 100% accurate in practical use, since it will never encounter a new position.
The thing is, I notice that people always refer to overfitting as a bad thing. So my question is: is this good practice, and why or why not?
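For reference, a minimal minimax labeler along the lines the question describes; the board encoding (a tuple of 9 cells, 1 for X, -1 for O, 0 for empty) and the tie-breaking are assumptions:

```python
# Minimal minimax labeler. Board encoding is an assumption: a tuple of
# 9 cells, 1 = X, -1 = O, 0 = empty. Exploring the full game tree takes
# a few seconds in pure Python.
def winner(b):
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for i, j, k in lines:
        if b[i] != 0 and b[i] == b[j] == b[k]:
            return b[i]  # 1 if X won, -1 if O won
    return 0

def minimax(b, player):
    """Return (score from X's point of view, best move) with `player` to move."""
    w = winner(b)
    if w != 0:
        return w, None
    moves = [i for i in range(9) if b[i] == 0]
    if not moves:
        return 0, None  # draw
    results = []
    for m in moves:
        score, _ = minimax(b[:m] + (player,) + b[m + 1:], -player)
        results.append((score, m))
    return max(results) if player == 1 else min(results)

print(minimax((0,) * 9, 1))  # score 0: perfect play from the start is a draw
```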

Overfitting is a problem when you want your model to generalize to new data. In your case there is no new data, so overfitting is not a problem.
But then, this is not what machine learning is usually used for. In most cases generalization is the whole point, which is why we go to great lengths to avoid overfitting.
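To make the idea concrete, here is a minimal sketch of deliberately memorizing the full enumeration with no held-out set. The board encoding and the stand-in labels are assumptions; the real labels would come from the asker's minimax step:

```python
# Deliberate overfitting on a finite, fully enumerated domain: train on
# everything, then measure accuracy on the same set. Encodings here are
# placeholders, not the asker's actual data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
positions = rng.integers(-1, 2, size=(4520, 9))     # stand-in boards
# Stand-in label: index of the first empty cell (real labels: minimax).
best_moves = np.argmin(np.abs(positions), axis=1)

model = MLPClassifier(hidden_layer_sizes=(256, 256),
                      max_iter=5000, tol=0.0, random_state=0)
model.fit(positions, best_moves)                    # no train/test split
print("training accuracy:", model.score(positions, best_moves))  # aim: 1.0
```

Of course, since the domain is finite and fully enumerated, a plain dictionary from position to best move achieves the same 100% accuracy with no training at all; the network is only interesting as a learning exercise or as a compressed representation of the lookup table.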

Related

Handwritten text comparison using Reinforcement Learning

I want to build an RL agent that can judge whether a handwritten word was written by the legitimate user or not. The plan is as follows:
Let's say I have written a word 10 times and extracted some geometrical properties from each sample to use as features. Then I train an RL agent to learn to make the decision based on the differences between the geometrical properties of the new sample and the 10 old handwritten samples. A reward is given for correct identification, and nothing (or a negative reward) for an incorrect one.
Am I going in the right direction, or am I missing anything vital? Is it possible to train the agent with only 10 samples? Actually, as a new student of RL, I am confused about its use cases: is it only a good fit for game playing and robotics problems, or is it also suitable for making predictions from training data?
Reinforcement learning is used for decisions made over time. If you were following the stroke of the pen over time to find out which way it was going, that would be more in reinforcement learning's wheelhouse. The time dimension (a series of states) is why it's used in games like StarCraft II.
You are talking about taking a picture of the written text and classifying it into a boolean (genuine or not). Convolutional neural networks are a better fit for your problem (those kinds of models are good with images).
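A minimal sketch of that CNN approach, assuming the handwriting samples are 64x64 grayscale images; the shapes and placeholder arrays are illustrative, not from the question:

```python
# Small binary CNN classifier: genuine handwriting vs. not. Input size
# and the random placeholder data are assumptions.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # genuine vs. forged
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# x: images, y: 1 = written by the legitimate user, 0 = not
x = np.random.rand(20, 64, 64, 1).astype("float32")  # placeholder data
y = np.random.randint(0, 2, size=(20, 1))
model.fit(x, y, epochs=3, batch_size=4)
```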
Eventually you won't be able to tell. There are techniques with GANs (Generative Adversarial Networks) where a generator trains against your discriminator, eventually figures out the pattern it's looking for, and fools it. But this sounds good as a homework problem.

How to clarify which model layers to use for machine learning?

We are currently running a small machine learning experiment with Deeplearning4j.
We have time-series voltage measurements from different devices that I know depend on each other.
We have managed to label a huge amount of this data with ones and zeroes.
Our problem is figuring out which layers to use for the model.
It seems to us that people choose layers based on experience, and the examples we find seem arbitrary.
We are currently using LSTM and RNN layers.
But how can we tell whether there are better models?
We would like to see whether the model can discover, through its predictions, dependencies that we haven't noticed.
The best way to go about this is to start by looking at your data and what you want to get out of it. Then set up a baseline: use the simplest modelling technique you are familiar with, just so you have anything at all.
In your case it looks like you have a label for each timestep. So you might just fit a simple regression for each timestep separately, to get a feel for what you get if you don't incorporate any sequence information at all. Anything that runs fast is a good candidate for this step.
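A sketch of such a baseline, assuming the data comes as (sequences, timesteps, features) with a 0/1 label per timestep; since the labels are binary, logistic regression stands in here for the simple per-timestep regression:

```python
# Per-timestep baseline that ignores all sequence information. Shapes
# are assumptions: X is (sequences, timesteps, features), y is 0/1 per
# timestep.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50, 4))        # stand-in voltage features
y = rng.integers(0, 2, size=(100, 50))   # stand-in labels

X_flat = X.reshape(-1, X.shape[-1])      # one row per timestep
y_flat = y.reshape(-1)

baseline = LogisticRegression().fit(X_flat, y_flat)
print("baseline accuracy (no sequence info):",
      baseline.score(X_flat, y_flat))
```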
Once you have that baseline, you can start looking at building a deeplearning model that outperforms this baseline.
For time series data, you have two options at the moment in DL4J, either you use a recurrent layer like LSTM, or you use convolutions over time.
If you want to have an output at each timestep, then a recurrent layer is probably better for you. The convolutional approach usually works best if you want to have just a single result after reading in the whole sequence.
For choosing how wide those layers should be, and how many layers you should use, you will have to experiment a bit.
The first thing you want to achieve is a model that can overfit on a subset of your data. Start by passing in only a single batch of examples over and over again. If the model can't overfit on that, make the layers wider. If the layers start getting too wide, add another layer on top.
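The question is about DL4J (Java), but this sanity check is framework-independent; here it is sketched in Keras, with all shapes assumed:

```python
# The "overfit one batch" sanity check: train on the same single batch
# until accuracy saturates. Batch size, sequence length, and feature
# count are placeholders.
import numpy as np
import tensorflow as tf

xb = np.random.randn(32, 50, 4).astype("float32")  # one batch, 50 steps
yb = np.random.randint(0, 2, size=(32, 50, 1))     # 0/1 per timestep

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(50, 4)),
    tf.keras.layers.LSTM(64, return_sequences=True),  # label per timestep
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# If accuracy can't approach 100% here, widen the LSTM; if widening
# stalls, add another recurrent layer on top.
model.fit(xb, yb, epochs=200, batch_size=32, verbose=0)
print(model.evaluate(xb, yb, verbose=0))
```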
If you use the deeplearning4j-ui module, it will tell you how many parameters your model currently has. That count should usually be less than the total number of examples you have, or you risk overfitting on your full data set.
As soon as you can train a model to overfit on a small subset of your data, you can start training it with all of your data.
At that point you can then start looking into finding better hyperparameters and seeing by how much you can beat your baseline.

How to deal with ill-conditioned neural networks?

When dealing with ill-conditioned neural networks, is the current state of the art to use an adaptive learning rate, some very sophisticated algorithm that deals with the problem, or to eliminate the ill-conditioning by preprocessing/scaling the data?
The problem can be illustrated with the simplest of scenarios: one input and one output, where the function to be learned is y = x/1000, so a single weight whose value needs to be 0.001. One data point is (0, 0). It turns out to matter a great deal, if you are using gradient descent, whether the second data point is (1000, 1) or (1, 0.001).
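To see this numerically, here is a tiny sketch of plain gradient descent on the two datasets; the learning rates are illustrative:

```python
# Fit y = w*x with plain gradient descent; the true weight is w = 0.001.
# The first data point is (0, 0); only the second differs between the
# two datasets.
import numpy as np

def fit(x2, y2, lr, steps=200):
    x = np.array([0.0, x2])
    y = np.array([0.0, y2])
    w = 0.0
    for _ in range(steps):
        grad = 2 * np.mean((w * x - y) * x)  # d/dw of mean squared error
        w -= lr * grad
    return w

print(fit(1, 0.001, lr=0.5))   # ~0.001: this learning rate works here
print(fit(1000, 1, lr=0.5))    # nan/inf: the same rate overshoots wildly
print(fit(1000, 1, lr=1e-7))   # ~0.001: needs a rate ~1e6 times smaller
print(fit(1, 0.001, lr=1e-7))  # ~0: that tiny rate barely moves w here
```

No single learning rate serves both scales, which is exactly the dilemma ill-conditioning creates.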
Theoretical discussion of the problem, with expanded examples.
Example in TensorFlow
Of course, plain gradient descent is not the only available algorithm. Other possibilities are discussed here; however, as that article observes, the alternative algorithms it lists that are good at handling ill-conditioning are not so good when it comes time to handle a large number of weights.
Are newer algorithms available? Yes, but they aren't clearly advertised as solutions to this problem and are perhaps intended to solve a different set of problems; swapping Adagrad in for gradient descent does prevent overshoot, but convergence is still very slow.
At one time there were efforts to develop heuristics that adaptively tweak the learning rate, but then the learning rate hyperparameter is a function instead of just a number, which is much harder to get right.
So these days, is the state of the art to use a more sophisticated algorithm to deal with ill-conditioning, or just to preprocess/scale the data to avoid the problem in the first place?

Random forest algorithms able to switch data sets

I'm curious whether research has been done into random forests that combine unsupervised with supervised learning in a way that allows a single algorithm to find patterns in, and work with, multiple different data sets. I have googled every possible way to find research on this and have come up empty. Can anyone point me in the right direction?
Note: I already asked this question on the Data Science forum, but it's basically a dead forum, so I came here.
(I also read the comments and have incorporated their content into my answer.)
Reading between the lines, it sounds like you want to use deep networks in a transfer-learning setting. However, that would not be based on decision trees.
http://jmlr.csail.mit.edu/proceedings/papers/v27/mesnil12a/mesnil12a.pdf
There are many elements in your question:
1.) Machine learning algorithms in general don't care about the source of your data set, so you can feed a learning algorithm 20 different data sets and it will use all of them. However, the data should share the same underlying concept (except in the transfer-learning case, see below). This means that combining cats/dogs data with billing data will not work, or will make things much harder for the algorithm. At a minimum, all input features need to be identical (exceptions exist); e.g., it is hard to combine images with text.
2.) labeled/unlabeled: Two important terms. A data set is a set of data points with a fixed number of dimensions; data point i might be described as {Xi1, ..., Xin}, where each Xij might, for example, be a pixel. A label Yi is from another domain, e.g., cats and dogs.
3.) unsupervised learning: data without any labels. (I have a gut feeling that this is not what you want.)
4.) semi-supervised learning: The idea is that you combine data with labels and data without labels. You have a set of images labeled as cats and dogs, {Xi1, ..., Xin, Yi}, and a second set that contains images of cats/dogs but no labels, {Xj1, ..., Xjn}. The algorithm can use the unlabeled data to build better classifiers, as it provides information on how images look in general.
5.) transfer learning (I think this comes closest to what you want): The idea is that you provide a data set of cats and dogs and learn a classifier. Afterwards you want to train a classifier for images of cats/dogs/hamsters. The training does not need to start from scratch but can start from the cats/dogs classifier and converge much faster (see the sketch after this list).
6.) feature generation / feature construction: The idea is that the algorithm learns features like "eyes", which are used in the next step to learn the classifier. I'm mainly aware of this in the context of deep learning, where the algorithm first learns concepts like edges and then constructs increasingly complex features, like faces or cats, until it can describe things like "the man on the elephant". This, combined with transfer learning, is probably what you want. However, deep learning is based on neural networks, with a few exceptions.
7.) outlier detection: You provide a data set of cats/dogs as known images. When you then show the classifier a hamster, it tells you that it has never seen anything like it before.
8.) active learning: The idea is that you don't provide labels for all examples (data points) beforehand; instead, the algorithm asks you to label certain data points. This way you need to label much less data.
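A minimal transfer-learning sketch for item 5, reusing a pretrained network as the starting point for a related task; the model choice (MobileNetV2), input size, and class count are all illustrative:

```python
# Transfer learning: keep a pretrained feature extractor fixed and train
# only a new classification head for the new task.
import tensorflow as tf

# Feature extractor pretrained on a large, generic image data set.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the learned features fixed

# New head: cats/dogs/hamsters instead of the original cats/dogs.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(new_images, new_labels, epochs=5)  # converges far faster
```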

Use feedback or reinforcement in machine learning?

I am trying to solve a classification problem. Many classical approaches follow a similar paradigm: train a model on some training set, then use it to predict the class labels for new instances.
I am wondering whether it is possible to introduce some feedback mechanism into this paradigm. In control theory, introducing a feedback loop is an effective way to improve system performance.
A straightforward approach I currently have in mind: first, start with an initial set of instances and train a model on them. Then, each time the model makes a wrong prediction, add the wrong instance to the training set. This is different from blindly enlarging the training set because it is more targeted. In the language of control theory, it can be seen as a kind of negative feedback.
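A sketch of that loop, with placeholder data and a placeholder model: retrain, collect misclassified instances, and fold a batch of them back into the training set:

```python
# Negative-feedback training loop: grow the training set from the
# model's own failures. Data and model are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(1000, 5))
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)

idx = list(rng.choice(len(X_pool), size=20, replace=False))  # seed set

for round_ in range(5):
    model = LogisticRegression().fit(X_pool[idx], y_pool[idx])
    taken = set(idx)
    wrong = [i for i in np.flatnonzero(model.predict(X_pool) != y_pool)
             if i not in taken]
    if not wrong:
        break
    idx.extend(wrong[:20])  # negative feedback: add a batch of failures
    print(f"round {round_}: training set grew to {len(idx)}")
```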
Is there any research going on into this feedback approach? Could anyone shed some light?
There are two areas of research that spring to mind.
The first is Reinforcement Learning. This is an online learning paradigm that lets you receive feedback and update your policy (in this instance, your classifier) as you observe the results.
The second is active learning, where the classifier selects examples from a pool of unlabelled examples to be labelled. The key is to have the classifier choose the examples whose labels would best improve its accuracy: the ones that are difficult under the current classifier hypothesis.
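A minimal sketch of that active-learning loop using least-confidence sampling; the data, model, and batch sizes are placeholders:

```python
# Least-confidence active learning: the model asks for labels on the
# pool examples it is least sure about.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = (X[:, 0] - X[:, 2] > 0).astype(int)  # the "oracle's" labels

labeled = list(rng.choice(len(X), size=10, replace=False))
unlabeled = [i for i in range(len(X)) if i not in set(labeled)]

for _ in range(5):
    model = LogisticRegression().fit(X[labeled], y[labeled])
    confidence = model.predict_proba(X[unlabeled]).max(axis=1)
    query = [unlabeled[j] for j in np.argsort(confidence)[:10]]
    labeled += query  # the oracle supplies labels for these
    unlabeled = [i for i in unlabeled if i not in set(query)]

print("labeled examples used:", len(labeled))
```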
I have used this kind of feedback in every machine learning project I have worked on. It lets you train on less data (so training is faster) than selecting data randomly, and model accuracy also improves faster than with randomly selected training data. I work on image processing (computer vision) data, so one other kind of selection I use is to add clustered false (wrong) examples instead of adding every single false example. Since I assume there will always be some failures, my criterion for adding data is that the failures cluster in the same area of the image.
I saw this paper some time ago; it seems to be what you are looking for.
They essentially model classification problems as Markov decision processes and solve them using the ACLA algorithm. The paper is much more detailed than what I could write here, but ultimately they get results that outperform a multilayer perceptron, so this looks like a fairly effective method.
