I have a textual data set which has already been classified. There are 7 available classes.
I used WEKA (the Waikato Environment for Knowledge Analysis) to build the model. With it, I trained and tested 3 different algorithms to determine which works best for my data set.
I tried Naive Bayes, J48 and Neural Networks (SMO), which are all available in WEKA's machine learning environment.
During training and testing, I found the following accuracy ranking for the three algorithms:
Neural Networks - 98%
Naive Bayes - 90%
J48 - 85%
Given the results above, I decided to use the neural network and build the model. I created an application in Java and loaded the neural network model built in WEKA.
However, my problem is that the model cannot predict new data correctly. I am a bit confused: during training and testing I obtained high accuracy, but during deployment the accuracy is only around 40%.
I tried to do this in C# and obtained the same results.
Below is sample code I used.
import weka.classifiers.Evaluation;
import weka.classifiers.meta.FilteredClassifier;
import weka.core.Instances;
import weka.core.SerializationHelper;
import weka.core.converters.ConverterUtils.DataSource;

// Load the test set and mark which attribute holds the class label.
DataSource source = new DataSource("C:\\Users\\Ian\\Desktop\\FINAL\\testdataset.arff");
Instances test = source.getDataSet();
test.setClassIndex(1);

// Deserialize the FilteredClassifier (the neural network model) built in WEKA.
FilteredClassifier cl1 = (FilteredClassifier) SerializationHelper.read("C:\\FINAL\\NeuralNetworks.model");

// Evaluate the model on the test set and print a summary.
Evaluation evaluation = new Evaluation(test);
evaluation.evaluateModel(cl1, test);
System.out.println("Results: " + evaluation.toSummaryString());

// Print the true and predicted label for every instance.
for (int i = 0; i < test.numInstances(); i++) {
    String trueClassLabel = test.instance(i).toString(test.classIndex());
    double predictionIndex = cl1.classifyInstance(test.instance(i));
    String predictedClassLabel = test.classAttribute().value((int) predictionIndex);
    System.out.println((i + 1) + "\t" + trueClassLabel + "\t" + predictedClassLabel);
}
Any advice on where you think I went wrong?
After our short chat in the comments, it seems obvious to me that you are overfitting on your training data. This is most likely caused by a neural network architecture that is too powerful for the problem you are trying to solve. It can be shown that a neural network with enough degrees of freedom can represent almost any function. Instead of finding a well-generalizing solution, the NN memorized the training data during training, leading to nearly perfect accuracy. But as soon as it has to deal with new data it performs poorly, because it never found a proper generalization rule.
In order to solve this problem, you have to reduce the degrees of freedom of your NN. This can be achieved by reducing the number of layers and the number of nodes in each layer. Start as simple as one or two hidden layers with very few nodes, then keep increasing both nodes and layers until you reach your best performance.
Important: always measure performance on an independent test set, never on the same data you trained the model with.
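In WEKA this corresponds to the hiddenLayers (-H) option of MultilayerPerceptron. As a rough sketch of the same idea outside WEKA, using scikit-learn (purely illustrative; the parameter values are placeholders, not your setup):

from sklearn.neural_network import MLPClassifier

# Start with a deliberately small network: one hidden layer with two nodes.
clf = MLPClassifier(hidden_layer_sizes=(2,), max_iter=2000, random_state=0)
# Then grow the architecture, e.g. (8,), (16, 8), ..., but only as long as
# accuracy on a held-out test set keeps improving.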
You can find some further hints on this issue here
Suppose one makes a neural network using Keras. Do the trained weights depend on the order in which the training data is fed into the system? Is it OK to feed data belonging to one category first and then data belonging to another category, or should they be fed in random order?
As training is done in batches, i.e. the weights are optimized on the data chunk by chunk, the main assumption is that each batch is somewhat representative of the dataset. To make it representative, it is better to sample the data randomly.
Bottom line: it will theoretically learn better if you feed the data to the neural network in random order. I strongly advise you to shuffle your dataset when you feed it in training mode (there is an option for this in the .fit() function).
In inference mode, if you only make a forward pass on the neural net, the order doesn't matter at all, since you don't change the weights.
I hope this clarifies things a bit for you :-)
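For concreteness, here is a minimal Keras sketch (the data and model are made up; the shuffle argument is the point):

import numpy as np
from tensorflow import keras

# Toy stand-in data: 1000 samples, 20 features, 2 classes.
x = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=1000)

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# shuffle=True (the default) reshuffles the training data before every epoch,
# so each batch is a roughly representative sample of the whole set.
model.fit(x, y, epochs=5, batch_size=32, shuffle=True)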
Nassim's answer is believed to be true for small networks and datasets, but recent articles suggest that for deeper networks (more than 4 layers), not shuffling your data set might act as a kind of regularization, as poor minima are expected to be deep but small while good minima are expected to be wide and hard to leave.
At inference time, the only case where this might harm your inference process is if you use the training distribution of your data in a highly coupled manner, e.g. running BatchNormalization or Dropout in training mode (as is sometimes done in some kinds of Bayesian deep learning).
I have a data set for a classification problem. I have 50 classes in total.
Class1: 10,000 examples
Class2: 10 examples
Class3: 5 examples
Class4: 35 examples
.
.
.
and so on.
I tried to train my classifier using an SVM (both linear and Gaussian kernels). My accuracy on the test data is very bad: 65% and 72% respectively. Now I am thinking of trying a neural network. Do you have any suggestions for a machine learning model and algorithm for large, imbalanced data? It would be extremely helpful to me.
You should provide more information about the data set features and the class distribution; this would help others advise you.
In any case, I don't think a neural network fits here, as this data set is too small for it.
Assuming 50% or more of the samples are of class 1, I would first look for a classifier that differentiates between class 1 and non-class-1 samples (binary classification). This classifier should outperform a naive benchmark classifier which randomly chooses a class with a prior corresponding to the training set class distribution.
For example, if there are 1,000 samples, of which 700 are of class 1, the benchmark classifier would classify a new sample as class 1 with probability 700/1,000 = 0.7 (like an unfair coin toss).
Once you have found a classifier with good accuracy, the next phase can be classifying the samples labelled non-class-1 as one of the other 49 classes. Assuming these classes are more balanced, I would start with RF, NB and KNN.
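A minimal sketch of this two-stage idea in scikit-learn (X and y are hypothetical feature/label arrays; the model choices are placeholders):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Stage 1: class 1 vs. everything else (binary).
y_binary = (y == 1).astype(int)
stage1 = LogisticRegression(max_iter=1000).fit(X, y_binary)

# Stage 2: trained only on the remaining 49 classes.
mask = y != 1
stage2 = RandomForestClassifier(n_estimators=200).fit(X[mask], y[mask])

def predict_two_stage(x_row):
    x_row = x_row.reshape(1, -1)
    if stage1.predict(x_row)[0] == 1:
        return 1
    return stage2.predict(x_row)[0]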
There are multiple ways to handle imbalanced datasets. You can try:
Up-sampling
Down-sampling
Class weights
I would suggest either up-sampling or providing class weights to balance it.
https://towardsdatascience.com/5-techniques-to-work-with-imbalanced-data-in-machine-learning-80836d45d30c
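For class weights, a minimal scikit-learn sketch (X_train and y_train are hypothetical training arrays):

from sklearn.svm import SVC

# class_weight="balanced" weights errors on each class inversely to its
# frequency, so the 5-example class counts as much as the 10,000-example one.
clf = SVC(kernel="rbf", class_weight="balanced")
clf.fit(X_train, y_train)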
You should also think about your performance metric. Don't use accuracy as your performance metric; you can use log loss or any other suitable metric.
https://machinelearningmastery.com/failure-of-accuracy-for-imbalanced-class-distributions/
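For example (y_true, y_pred and y_proba are hypothetical labels, hard predictions and predicted class probabilities):

from sklearn.metrics import f1_score, log_loss

# Accuracy rewards always predicting the majority class; these metrics don't.
print(log_loss(y_true, y_proba))                  # needs class probabilities
print(f1_score(y_true, y_pred, average="macro"))  # treats all classes equally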
From my experience, the most successful ways to deal with unbalanced classes are:
Changing the distribution of inputs: 20,000 samples (the approximate number of examples you have) is not a big number, so you could change your dataset distribution simply by using every sample from the less frequent classes multiple times. Depending on the number of classes, you could set the number of examples from each of them to e.g. 6,000 or 8,000 in your training set (a minimal sketch follows this list). In this case, remember not to change the distribution of your test and validation sets.
Increasing the training time: in the case of neural networks, when changing the distribution of your input is impossible, I strongly advise trying to train the network for quite a long time (e.g. 1,000 epochs). In this case you have to remember about regularisation: I usually use dropout and an L2 weight regulariser, with their parameters tuned by random search.
Reducing the batch size: in the neural network case, reducing the batch size might improve performance on the less frequent classes.
Changing your loss function: using MAPE instead of cross-entropy may also improve accuracy on the less frequent classes.
Feel free to test different combinations of these approaches, found e.g. by a random search algorithm.
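A minimal sketch of the duplication-based resampling from the first point (pure NumPy; target_count is the per-class size you choose, e.g. 6,000):

import numpy as np

def oversample_to(X, y, target_count, seed=0):
    # Sample each class with replacement until it has target_count examples.
    # Apply to the training split only, never to validation or test.
    rng = np.random.default_rng(seed)
    picked = []
    for cls in np.unique(y):
        idx = np.where(y == cls)[0]
        picked.append(rng.choice(idx, size=target_count, replace=True))
    picked = rng.permutation(np.concatenate(picked))
    return X[picked], y[picked]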
Data-level methods:
Undersampling runs the risk of losing important data. Oversampling runs the risk of overfitting on the training data, especially if the added copies of the minority class are replicas of existing data. Many sophisticated sampling techniques have been developed to mitigate these risks.
One such technique is two-phase learning. You first train your model on the resampled data, which can be obtained by randomly undersampling the large classes until each class has only N instances. You then fine-tune your model on the original data.
Another technique is dynamic sampling: oversample the low-performing classes and undersample the high-performing classes during the training process. Introduced by Pouyanfar et al., the method aims to show the model less of what it has already learned and more of what it has not.
Algorithm-level methods:
Cost-sensitive learning
Class-balanced loss
Focal loss (sketched below)
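Of these, focal loss is the easiest to show compactly. A minimal binary sketch (a hypothetical helper, following the definition from Lin et al.'s focal loss paper):

import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    # Binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t).
    # p: predicted probability of the positive class; y: labels in {0, 1}.
    # The (1 - p_t)^gamma factor down-weights easy, well-classified examples.
    p_t = np.where(y == 1, p, 1 - p)
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma
                         * np.log(np.clip(p_t, 1e-12, 1.0))))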
References:
Designing Machine Learning Systems
Survey on deep learning with class imbalance
I have a minimal example of a neural network with a back-propagation trainer, testing it on the IRIS data set. I started off with 7 hidden nodes and it worked well.
I lowered the number of nodes in the hidden layer to 1 (expecting it to fail), but was surprised to see that the accuracy went up.
I set up the experiment in Azure ML, just to validate that it wasn't my code. Same thing there: 98.3333% accuracy with a single hidden node.
Can anyone explain to me what is happening here?
First, it has been well established that a variety of classification models yield incredibly good results on Iris (Iris is very predictable); see here, for example.
Secondly, we can observe that there are relatively few features in the Iris dataset. Moreover, if you look at the dataset description you can see that two of the features are very highly correlated with the class outcomes.
These correlation values are linear, single-feature correlations, which indicates that one can most likely apply a linear model and observe good results. Neural nets are highly nonlinear; they become more and more complex and capture greater and greater nonlinear feature combinations as the number of hidden nodes and hidden layers is increased.
Taking these facts into account, that (a) there are few features to begin with and (b) there are high linear correlations with class, would all point to a less complex, linear function being the appropriate predictive model. By using a single hidden node, you are very nearly using a linear model.
It can also be noted that, in the absence of any hidden layer (i.e., just input and output nodes), and when the logistic transfer function is used, this is equivalent to logistic regression.
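A quick way to see this on Iris, sketched with scikit-learn (purely illustrative, not the asker's own back-propagation code; both models tend to score well here):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# A plain linear model and a nearly linear network with one hidden node.
linear = LogisticRegression(max_iter=1000)
tiny_nn = MLPClassifier(hidden_layer_sizes=(1,), max_iter=5000, random_state=0)

print(cross_val_score(linear, X, y, cv=5).mean())
print(cross_val_score(tiny_nn, X, y, cv=5).mean())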
Just adding to DMlash's very good answer: the Iris data set can be predicted with very high accuracy (96%) by using just three simple rules on only one attribute:
If Petal.Width = (0.0976,0.791] then Species = setosa
If Petal.Width = (0.791,1.63] then Species = versicolor
If Petal.Width = (1.63,2.5] then Species = virginica
In general, neural networks are black boxes where you never really know what they are learning, but in this case reverse-engineering should be easy. It is conceivable that it learnt something like the above.
The above rules were found by using the OneR package.
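For reference, you can check those three rules in a few lines of Python (the thresholds are taken from the rules above; this should reproduce roughly the quoted 96%):

import numpy as np
from sklearn.datasets import load_iris

iris = load_iris()
petal_width = iris.data[:, 3]   # the fourth feature is petal width (cm)

# Apply the three OneR rules: setosa / versicolor / virginica = 0 / 1 / 2.
pred = np.where(petal_width <= 0.791, 0,
                np.where(petal_width <= 1.63, 1, 2))

print((pred == iris.target).mean())   # around 0.96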
Given any image, I want my classifier to tell whether it is a sunflower or not. How can I go about creating the second class? Keeping the set of all possible images minus {Sunflower} in the second class is overkill. Is there any research in this direction? Currently my classifier uses a neural network in the final layer. I have based it on the following tutorial:
https://github.com/torch/tutorials/tree/master/2_supervised
I am taking 254x254 images as the input.
Would an SVM help in the final layer? I am also open to using any other classifier/features that might help.
The standard approach in ML is:
1) Build a model
2) Train it on some data with positive/negative examples (start with a 50/50 split of positive/negative in the training set)
3) Validate it on a test set (again, aim for a 50/50 split of positive/negative examples)
If the results are not fine:
a) Try a different model
b) Get more data
For case (b), when deciding which additional data you need, a rule of thumb that works nicely for me is:
1) If the classifier gives lots of false positives (says it is a sunflower when it is actually not a sunflower at all), get more negative examples.
2) If the classifier gives lots of false negatives (says it is not a sunflower when it actually is one), get more positive examples.
Generally, start with some reasonable amount of data, check the results, and if the results on the train or test set are bad, get more data. Stop getting more data when you reach optimal results.
Another thing to consider: if your results with the current data and classifier are not good, you need to understand whether the problem is high bias (bad results on both the train and test sets) or high variance (nice results on the train set but bad results on the test set). If you have a high-bias problem, more data or a more powerful classifier will definitely help. If you have a high-variance problem, a more powerful classifier is not needed and you need to think about generalization: introduce regularization, or maybe remove a couple of layers from your ANN. Another possible way of fighting high variance is getting much, MUCH more data.
So to sum up, use an iterative approach and try to increase the amount of data step by step until you get good results. There is no magic-stick classifier and there is no simple answer to how much data you should use.
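A quick way to check which regime you are in, in scikit-learn terms (clf and the arrays are hypothetical, and the thresholds are illustrative, not canonical):

train_acc = clf.score(X_train, y_train)
test_acc = clf.score(X_test, y_test)

if train_acc < 0.80:
    print("High bias: the model underfits even the training data")
elif train_acc - test_acc > 0.10:
    print("High variance: the model memorizes but doesn't generalize")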
It is a good idea to use a CNN as the feature extractor: peel off the original fully connected layer that was used for classification and add a new classifier. This is known as transfer learning, a technique that has been widely used in the deep learning research community. For your problem, using a one-class SVM as the added classifier is a good choice.
Specifically,
a good CNN feature extractor can be trained on a large dataset, e.g. ImageNet,
the one-class SVM can then be trained using your 'sunflower' dataset.
The essential part of solving your problem is the implementation of the one-class SVM, which is also known as anomaly detection or novelty detection. You may refer to http://scikit-learn.org/stable/modules/outlier_detection.html for some insight into the method.
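A minimal sketch of that second step with scikit-learn's OneClassSVM (the feature arrays are stand-ins for real CNN embeddings; nu and gamma are illustrative):

import numpy as np
from sklearn.svm import OneClassSVM

# Stand-in for CNN features extracted from sunflower images only.
sunflower_features = np.random.rand(500, 2048)

ocsvm = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale")
ocsvm.fit(sunflower_features)

# predict() returns +1 for "looks like the training class" and -1 for novelty.
new_features = np.random.rand(1, 2048)
print(ocsvm.predict(new_features))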
We all know that an SVM's objective function is optimized iteratively. To continue training on the same training dataset, we can at least store all the variables used in the iterations.
But if we want to train on a slightly different dataset, what should we do to make full use of the previously trained model? Or does this kind of idea even make sense? I think it is quite reasonable if we train a k-means model, but I am not sure whether it still makes sense for the SVM problem.
There is some literature on this topic:
Alpha-seeding, in which the training data is divided into chunks. After you train an SVM on the i-th chunk, you take the resulting alpha values and use them to seed training on the (i+1)-th chunk.
Incremental SVM, which serves as online learning: you update the classifier with new examples rather than retraining on the entire data set (see the sketch after this list).
The SVMHeavy package, which supports online SVM training as well.
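This is not alpha-seeding itself, but one practical way to get the incremental behaviour with a linear SVM in scikit-learn (the chunk arrays are hypothetical):

import numpy as np
from sklearn.linear_model import SGDClassifier

# loss="hinge" makes this a linear SVM trained by stochastic gradient descent.
clf = SGDClassifier(loss="hinge")

# The full set of classes must be declared on the first partial_fit call.
clf.partial_fit(X_chunk1, y_chunk1, classes=np.array([0, 1]))
# Later chunks continue from the current weights instead of retraining.
clf.partial_fit(X_chunk2, y_chunk2)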
What you are describing is what an online learning algorithm does, and unfortunately the classic SVM formulation is defined in a batch fashion.
However, there are several solvers for SVM that produce quasi-optimal hypotheses for the underlying optimization problem in an online learning fashion. In particular, my favourite is Pegasos-SVM, which can find a good near-optimal solution in linear time:
http://ttic.uchicago.edu/~nati/Publications/PegasosMPB.pdf
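A minimal NumPy sketch of the Pegasos update (binary labels in {-1, +1}; the optional projection step from the paper is omitted):

import numpy as np

def pegasos(X, y, lam=0.1, epochs=10, seed=0):
    # Stochastic sub-gradient descent on the regularized hinge loss.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)           # the Pegasos step-size schedule
            margin = y[i] * (X[i] @ w)      # computed with the current w
            w *= (1.0 - eta * lam)          # shrink from the L2 regularizer
            if margin < 1.0:                # hinge sub-gradient is active
                w += eta * y[i] * X[i]
    return w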
In general this doesn't make sense. SVM training is an optimization process with regard to every training set vector. Each training vector has an associated coefficient, which as a result is either 0 (irrelevant) or > 0 (a support vector). Adding another training vector imposes a different optimization problem.
The only way I can think of to reuse information from previous training is to choose the support vectors from the previous training and add them to the new training set. I'm not sure, but this will probably affect generalization negatively: the VC dimension of an SVM is related to the number of support vectors, so adding previous support vectors to the new dataset is likely to increase the support vector count.
Apparently, there are more possibilities, as noted in lennon310's answer.