Why RNN LSTM model with 92% accuracy fails on unseen data? - machine-learning

I have built an RNN LSTM model as a three-class classifier. It gives around 94% train and test accuracy, but when I test it on real data the results are quite inaccurate. The application is NLP-related.
My raw dataset format looks like:
DATA    | Class
String1 | 0
String2 | 1
String3 | 2
Accuracy, F1 score, and confusion matrix:
Accuracy = 91.07142857142857
F1 Score = 0.9104914905591774
[[39 3 0]
[ 2 36 4]
[ 0 1 27]]
[Graph of training and validation accuracy]
What I want to achieve is that, when I feed in some string, if that string contains words common to class 0, the model should predict class 0. Any help?

Related

Is there a reason why a feature only present in a given class is not being predicted strongly into that class?

Summary & Questions
I'm using liblinear 2.30. I noticed a similar issue in prod, so I tried to isolate it with a simple, reduced training setup: 2 classes, 1 training doc per class, 5 features with the same weight in my vocabulary, and 1 simple test doc containing only one feature, which is present only in class 2.
a) What is the feature value used for?
b) Why is this test document, which contains a single feature present only in one class, not strongly predicted into that class?
c) I'm not expecting to have different values per feature. Are there any other implications of increasing each feature value from 1 to something else? How can I determine that number?
d) Could my changes affect other, more complex trainings in a bad way?
What I tried
Below you will find data related to a simple training (please focus on feature 5):
> cat train.txt
1 1:1 2:1 3:1
2 2:1 4:1 5:1
> train -s 0 -c 1 -p 0.1 -e 0.01 -B 0 train.txt model.bin
iter 1 act 3.353e-01 pre 3.333e-01 delta 6.715e-01 f 1.386e+00 |g| 1.000e+00 CG 1
iter 2 act 4.825e-05 pre 4.824e-05 delta 6.715e-01 f 1.051e+00 |g| 1.182e-02 CG 1
> cat model.bin
solver_type L2R_LR
nr_class 2
label 1 2
nr_feature 5
bias 0
w
0.3374141436539016
0
0.3374141436539016
-0.3374141436539016
-0.3374141436539016
0
Below you will find my model's prediction:
> cat test.txt
1 5:1
> predict -b 1 test.txt model.bin test.out
Accuracy = 0% (0/1)
> cat test.out
labels 1 2
2 0.416438 0.583562
And here is where I'm a bit surprised: the prediction is just [0.42, 0.58], even though feature 5 is present only in class 2. Why?
So I tried increasing the feature value in the test doc from 1 to 10:
> cat newtest.txt
1 5:10
> predict -b 1 newtest.txt model.bin newtest.out
Accuracy = 0% (0/1)
> cat newtest.out
labels 1 2
2 0.0331135 0.966887
And now I get a better prediction, [0.03, 0.97]. So I then re-ran the training with all feature values set to 10:
> cat newtrain.txt
1 1:10 2:10 3:10
2 2:10 4:10 5:10
> train -s 0 -c 1 -p 0.1 -e 0.01 -B 0 newtrain.txt newmodel.bin
iter 1 act 1.104e+00 pre 9.804e-01 delta 2.508e-01 f 1.386e+00 |g| 1.000e+01 CG 1
iter 2 act 1.381e-01 pre 1.140e-01 delta 2.508e-01 f 2.826e-01 |g| 2.272e+00 CG 1
iter 3 act 2.627e-02 pre 2.269e-02 delta 2.508e-01 f 1.445e-01 |g| 6.847e-01 CG 1
iter 4 act 2.121e-03 pre 1.994e-03 delta 2.508e-01 f 1.183e-01 |g| 1.553e-01 CG 1
> cat newmodel.bin
solver_type L2R_LR
nr_class 2
label 1 2
nr_feature 5
bias 0
w
0.19420510395364846
0
0.19420510395364846
-0.19420510395364846
-0.19420510395364846
0
> predict -b 1 newtest.txt newmodel.bin newtest.out
Accuracy = 0% (0/1)
> cat newtest.out
labels 1 2
2 0.125423 0.874577
And again the prediction for class 2 was still reasonable: 0.87.
a) What is the feature value used for?
Each instance with n features is treated as a point in an n-dimensional space with an attached label, say +1 or -1 (in your case 1 or 2). A linear classifier (such as a linear SVM or logistic regression) tries to find the best hyperplane separating those instances into two sets, say SetA and SetB. A hyperplane is roughly considered better than another when SetA contains more of the instances labeled +1 and SetB contains more of those labeled -1, i.e., when it is more accurate. The best hyperplane is saved as the model. In your case, the hyperplane has the formulation:
f(x)=w^T x
where w is the model, e.g (0.33741,0,0.33741,-0.33741,-0.33741) in your first case.
Probability (for LR) formulation:
prob(x) = 1 / (1 + exp(-y * f(x)))
where y=+1 or -1. See Appendix L of LIBLINEAR paper.
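As a quick sanity check, here is a minimal NumPy sketch (not liblinear itself) that plugs the weights of your first model and the test doc 1 5:1 into these formulas and recovers the reported probabilities:
import numpy as np

# Weights w from model.bin above, in feature order 1..5.
w = np.array([0.3374141436539016, 0, 0.3374141436539016,
              -0.3374141436539016, -0.3374141436539016])

x = np.array([0, 0, 0, 0, 1])        # test doc "1 5:1"
f = w @ x                            # decision value f(x) = w^T x

p_label_1 = 1 / (1 + np.exp(-f))     # prob(x) with y = +1 (label 1)
p_label_2 = 1 - p_label_1            # label 2
print(p_label_1, p_label_2)          # ~0.416438 ~0.583562, matching test.out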
b) I wanted to understand why this test document, which contains a single feature present only in one class, is not strongly predicted into that class.
It is not only 1 5:1 that gives a weak probability such as [0.42, 0.58]; if you predict 2 2:1 4:1 5:1 you get [0.337417, 0.662583], so the solver is not very confident about that result either, even though the input is exactly one of the training instances.
The fundamental reason is the value of f(x), which can simply be seen as the (signed) distance between x and the hyperplane. The model can be 100% confident that x belongs to a certain class only if that distance is infinitely large (see prob(x)).
c) I'm not expecting to have different values per feature. Are there any other implications of increasing each feature value from 1 to something else? How can I determine that number?
TL;DR
Enlarging the feature values of both the training and test set is like having a larger penalty parameter C (the -c option). Because a larger C means a stricter penalty on errors, intuitively the solver has more confidence in its predictions.
Enlarging the feature values of the training set alone is like having a smaller C.
Specifically, logistic regression solves the following problem for w:
min_w 0.5 w^T w + C Σ_i log(1 + exp(−y_i w^T x_i))
(eq. (3) of the LIBLINEAR paper)
For most instances, y_i w^T x_i is positive, and a larger x_i implies a smaller Σ_i log(1 + exp(−y_i w^T x_i)).
So the effect is somewhat similar to having a smaller C, and a smaller C implies a smaller |w|.
On the other hand, enlarging the feature values of the test set has the same effect as a larger |w|. Therefore, enlarging the feature values of both the training and test set amounts to:
(1) having a smaller |w| during training, and
(2) having a larger |w| during testing.
Because the effect in (2) is more dramatic than in (1), overall, enlarging both is like having a larger |w|, or, equivalently, a larger C.
We can run on the data set with every feature multiplied by 10^12. With C=1, we get the following model and probability:
> cat model.bin.m1e12.c1
solver_type L2R_LR
nr_class 2
label 1 2
nr_feature 5
bias 0
w
3.0998430106024949e-12
0
3.0998430106024949e-12
-3.0998430106024949e-12
-3.0998430106024949e-12
0
> cat test.out.m1e12.c1
labels 1 2
2 0.0431137 0.956886
Next we run on the original data set with C=10^12, and get the following model and probability:
> cat model.bin.m1.c1e12
solver_type L2R_LR
nr_class 2
label 1 2
nr_feature 5
bias 0
w
3.0998430101989314
0
3.0998430101989314
-3.0998430101989314
-3.0998430101989314
0
> cat test.out.m1.c1e12
labels 1 2
2 0.0431137 0.956886
Therefore, because a larger C means a stricter penalty on errors, intuitively the solver has more confidence in its prediction.
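If you want to reproduce this behaviour outside the liblinear command line, here is a rough sklearn sketch (an illustration only; it assumes sklearn's L2-regularized LogisticRegression is a close stand-in for L2R_LR, so the exact numbers will differ slightly). Scaling the features of both the training and test data by s behaves roughly like multiplying C by s^2:
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[1, 1, 1, 0, 0],        # class 1 doc: features 1, 2, 3
              [0, 1, 0, 1, 1]])       # class 2 doc: features 2, 4, 5
y = np.array([1, 2])
x_test = np.array([[0, 0, 0, 0, 1]])  # only feature 5

for scale, C in [(1, 1.0), (10, 1.0), (1, 100.0)]:
    clf = LogisticRegression(C=C, fit_intercept=False)   # no bias term, like -B 0
    clf.fit(X * scale, y)
    # Scaling train+test by 10 and raising C to 100 give similarly confident outputs.
    print(scale, C, clf.predict_proba(x_test * scale))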
d) Could my changes affect other more complex trainings in a bad way?
From (c) we know your change is like having a larger C, which will result in better training accuracy. But it is almost certain that the model will overfit the training set when C grows too large. As a result, the model cannot tolerate the noise in the training set and will perform badly on test data.
As for finding a good C, a popular way is cross-validation (the -v option).
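If you work from Python instead of the liblinear command line, the same kind of search can be sketched with sklearn's cross_val_score (synthetic placeholder data here; substitute your own document-term matrix and labels):
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder data; replace with your own features and labels.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

for C in [0.01, 0.1, 1, 10, 100]:
    clf = LogisticRegression(C=C, max_iter=1000)
    print(C, cross_val_score(clf, X, y, cv=5).mean())   # pick the C with the best CV score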
Finally, it may be off-topic, but you may want to look at how you pre-process your text data. It is common (e.g., suggested by the author of liblinear here) to normalize the data instance-wise:
"For document classification, our experience indicates that if you normalize each document to unit length, then not only is the training time shorter, but the performance is also better."
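For reference, instance-wise (per-document) L2 normalization is a one-liner in sklearn; a small sketch, using the two training docs above as a toy matrix:
import numpy as np
from sklearn.preprocessing import normalize

X = np.array([[1., 1., 1., 0., 0.],
              [0., 1., 0., 1., 1.]])

X_unit = normalize(X, norm='l2')        # scale each row (document) to unit length
print(np.linalg.norm(X_unit, axis=1))   # [1. 1.]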

Is the majority class treated as positive in Sklearn? Sklearn calculates the false positive rate as the false negative rate

I'm working on classification with an imbalanced data set using Sklearn. Sklearn calculates the false_positive_rate and true_positive_rate differently from what I expected; when I calculate the AUC score, the result is different from what I get from the confusion matrix.
From Sklearn I got the following confusion matrix:
confusion = confusion_matrix(y_test, y_pred)
array([[ 9100, 4320],
[109007, 320068]], dtype=int64)
Of course, I understand the output as:
+-----------------+--------------------------+--------------------------+
|                 | Predicted positive       | Predicted negative       |
+-----------------+--------------------------+--------------------------+
| Actual positive | True positive  = 9100    | False negative = 4320    |
| Actual negative | False positive = 109007  | True negative  = 320068  |
+-----------------+--------------------------+--------------------------+
However, for FPR and TPR, I got the following result:
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred)
(false_positive_rate, true_positive_rate)
(array([0. , 0.3219076, 1. ]),
array([0. , 0.7459488, 1. ]))
The result is different from the confusion_matrix: according to my table, the FPR is actually the FNR, and the TPR is actually the TNR. Then I checked the confusion_matrix documentation and found that:
"Thus, in binary classification, the count of true negatives is C[0,0], false negatives is C[1,0], true positives is C[1,1], and false positives is C[0,1]."
This means that the confusion_matrix, according to Sklearn, looks like this:
+-----------------+--------------------------+--------------------------+
|                 | Predicted positive       | Predicted negative       |
+-----------------+--------------------------+--------------------------+
| Actual positive | True positive  = 320068  | False negative = 109007  |
| Actual negative | False positive = 4320    | True negative  = 9100    |
+-----------------+--------------------------+--------------------------+
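A small sketch with toy labels (not my data) that makes sklearn's ordering explicit, and shows how the labels argument can put the rare/positive class first:
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 0, 0, 0])   # toy labels; 1 is the rare/positive class
y_pred = np.array([1, 0, 0, 0, 1])

# Default ordering follows sorted labels, so the layout is [[TN, FP], [FN, TP]].
print(confusion_matrix(y_true, y_pred))                 # [[2 1]
                                                        #  [1 1]]
# labels=[1, 0] puts the positive class first: [[TP, FN], [FP, TN]].
print(confusion_matrix(y_true, y_pred, labels=[1, 0]))  # [[1 1]
                                                        #  [1 2]]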
According to the theory, for binary classification, the rare class is denoted as the positive class.
Why does Sklearn treat the majority class as positive?
After some experiments, I found that when IsolationForest from sklearn is used on imbalanced data and you check the confusion_matrix, you can see that IsolationForest treats the majority (normal) class as the positive class, whereas the minority class should be the positive class in fraud/outlier/anomaly detection tasks.
To overcome this challenge there are 2 solutions:
Interpret the confusion matrix results the other way around: read FP as FN and TP as TN.
If you want to report the results correctly despite this treatment of imbalanced data by IF, you can use the following trick:
IF typically returns -1 for outliers and 1 for inliers, so if you swap 1 and -1 in the output of the IsolationForest, you can then use the standard metric calculations correctly.
import numpy as np
from sklearn.ensemble import IsolationForest

IF_model = IsolationForest(max_samples="auto",
                           random_state=11,
                           contamination=0.1,
                           n_estimators=100,
                           n_jobs=-1)
IF_model.fit(X_train_sf, y_train_sf)
y_pred_test = IF_model.predict(X_test_sf)

counts = np.unique(y_pred_test, return_counts=True)
# (array([-1, 1]), array([44914, 4154]))

# Swap the labels (1 -> -1 and -1 -> 1) so that the minority class becomes the positive one.
if ((counts[1][0] < counts[1][1] and counts[0][0] == -1)
        or (counts[1][0] > counts[1][1] and counts[0][0] == 1)):
    y_pred_test = -y_pred_test
Considering the confusion_matrix documentation and the problem definition here, the above trick should work, and the correct form of the confusion matrix for fraud/outlier/anomaly detection or binary classifiers, based on the literature (Ref.1, Ref.2, Ref.3), is as follows:
+----------------------------+---------------+--------------+
| | Predicted | Predicted |
+----------------------------+---------------+--------------+
| Actual (Positive class)[1] | TP | FN |
| Actual (Negative class)[-1]| FP | TN |
+----------------------------+---------------+--------------+
tn, fp, fn, tp = confusion_matrix(y_test_sf, y_pred_test).ravel()
print("TN: ",tn,"\nFP: ", fp,"\nFN: " ,fn,"\nTP: ", tp)
print("Number of positive class instances: ",tp+fn,"\nNumber of negative class instances: ", tn+fp)
check the evaluation:
print(classification_report(y_test_sf, y_pred_test, target_names=["Anomaly", "Normal"]))

What are some specific examples of Ensemble Learning?

What are some concrete real life examples which can be solved using Boosting/Bagging algorithms? Code snippets would be greatly appreciated.
Ensembles are used to fight overfitting / improve generalization, or to compensate for the specific weaknesses of individual classifiers / exploit their respective strengths. They can be applied to any classification task.
I used ensembles in my master's thesis. The code is on Github.
Example 1
For example, think of a binary problem where you have to tell if a data point is of class A or B. This could be an image, and you have to decide whether it shows (A) a dog or (B) a cat. Now you have two classifiers (1) and (2) (e.g., two neural networks trained in different ways, or an SVM and a decision tree, or ...). They make the following errors:
(1): Predicted
T | A B
R ------------
U A | 90% 10%
E B | 50% 50%
(2): Predicted
T | A B
R ------------
U A | 60% 40%
E B | 40% 60%
You could, for example, combine them into an ensemble by first using (1). If it predicts B, you then use (2); otherwise you stick with (1)'s prediction.
Now, what would be the expected accuracy, (falsely) assuming both classifiers are independent?
If the true class is A, then we predict with 90% the true result. In 10% of the cases we predict B and use the second classifier. This one gets it right in 60% of the cases. This means if we have A, we predict A in 0.9 + 0.1*0.6 = 0.96 = 96% of the cases.
If the true class is B, we predict B in 50% of the cases. But we also need to get it right the second time, so we are only correct in 0.5 * 0.6 = 0.3 = 30% of the cases.
So in this simple example we made the situation better for one class, but worse for the other.
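For what it's worth, here is the same arithmetic as a tiny Python check (the percentages are taken from the two error tables above):
# Row = true class, entries = probability of predicting A / B.
p1 = {'A': {'A': 0.9, 'B': 0.1}, 'B': {'A': 0.5, 'B': 0.5}}   # classifier (1)
p2 = {'A': {'A': 0.6, 'B': 0.4}, 'B': {'A': 0.4, 'B': 0.6}}   # classifier (2)

# True class A: correct if (1) says A, or (1) says B and (2) then says A.
acc_A = p1['A']['A'] + p1['A']['B'] * p2['A']['A']
# True class B: correct only if (1) says B and (2) also says B.
acc_B = p1['B']['B'] * p2['B']['B']
print(acc_A, acc_B)   # 0.96, 0.30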
Example 2
Now, let's say we have 3 classifiers with
Predicted
T | A B
R ------------
U A | 60% 40%
E B | 40% 60%
each, but the classifications are independent. What do you get when you make a majority vote?
If you have class A, the probability that at least two say it is class A is
0.6 * 0.6 * 0.6 + 0.6 * 0.6 * 0.4 + 0.6 * 0.4 * 0.6 + 0.4 * 0.6 * 0.6
= 1*0.6^3 + 3*(0.6^2 * 0.4^1)
= (3 choose 3) * 0.6^3 + (3 choose 2) * (0.6^2 * 0.4^1)
= 0.648
The same goes for the other class. So we improved the classifier to
Predicted
T | A B
R ------------
U A | 65% 35%
E B | 35% 65%
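The same majority-vote number as a short Python check, using the binomial coefficients from above:
from math import comb

p = 0.6   # accuracy of each individual classifier
# Probability that at least 2 of the 3 independent classifiers are correct.
majority_acc = comb(3, 3) * p**3 + comb(3, 2) * p**2 * (1 - p)
print(majority_acc)   # 0.648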
Code
See sklearn's page on ensembles for code.
The best-known concrete example of ensemble learning is the random forest.
Ensembling is the art of combining a diverse set of learners (individual models) to improve the stability and predictive power of the overall model.
Ensemble Learning Techniques:
Bagging: Bagging fits similar learners on small sample populations and then takes the mean of all the predictions. In generalized bagging, you can use different learners on different populations.
Boosting: Boosting is an iterative technique that adjusts the weight of each observation based on the previous classification. If an observation was classified incorrectly, its weight is increased, and vice versa. Boosting generally decreases bias error and builds strong predictive models.
Stacking: This is a very interesting way of combining models. Here we use a learner to combine the outputs of different learners. This can decrease either the bias or the variance error, depending on the combining learner we use. (A minimal sklearn sketch of all three techniques follows below.)
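As referenced above, here is a minimal sklearn sketch of all three techniques (illustrative only; the toy data and hyperparameters are arbitrary):
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier, StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)  # toy data

bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
boosting = AdaBoostClassifier(n_estimators=50, random_state=0)
stacking = StackingClassifier(
    estimators=[('tree', DecisionTreeClassifier()), ('lr', LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000))

for name, clf in [('bagging', bagging), ('boosting', boosting), ('stacking', stacking)]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())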
For more background, see:
Basics of Ensemble Learning Explained
Here's Python-based pseudo-code for basic ensemble learning (averaging and max-voting over three trained Keras models):
import numpy
from typing import List
from scipy.stats import mode
from keras.models import Model
from keras.layers import Average

# 3 ML/DL models -> first_model, second_model, third_model (built and trained elsewhere)
all_models = [first_model, second_model, third_model]
first_model.load_weights(first_weight_file)
second_model.load_weights(second_weight_file)
third_model.load_weights(third_weight_file)

def ensemble_average(models: List[Model]):  # averaging
    # Build a model that averages the outputs of the individual models;
    # model_input is the shared input tensor of the three models.
    outputs = [model.outputs[0] for model in models]
    y = Average()(outputs)
    model = Model(model_input, y, name='ensemble_average')
    pred = model.predict(x_test, batch_size=32)
    pred = numpy.argmax(pred, axis=1)
    E = numpy.sum(numpy.not_equal(pred, y_test)) / y_test.shape[0]  # error rate
    return E

def ensemble_vote(models: List[Model]):  # max-voting
    pred = []
    yhats = numpy.array([model.predict(x_test) for model in models])
    yhats = numpy.argmax(yhats, axis=2)  # per-model class prediction for each sample
    for i in range(len(x_test)):
        m = mode([yhats[0][i], yhats[1][i], yhats[2][i]])
        pred = numpy.append(pred, m[0])
    E = numpy.sum(numpy.not_equal(pred, y_test)) / y_test.shape[0]  # error rate
    return E

# Error calculation
E1 = ensemble_average(all_models)
E2 = ensemble_vote(all_models)

What does k fold validation mean in the context of POS tagging?

I know that for k-cross validation, I'm supposed to divide the corpus into k equal parts. Of these k parts, I'm to use k-1 parts for training and the remaining 1 part for testing. This process is to be repeated k times, such that each part is used once for testing.
But I don't understand what exactly training means here and what exactly testing means.
What I think is (please correct me if I'm wrong):
1. Training sets (k-1 out of k): These sets are used to build the tag transition probability and emission probability tables; some tagging algorithm (e.g., the Viterbi algorithm) is then applied using these probability tables.
2. Test set (the remaining 1 set): Use the remaining set to validate the implementation from step 1. That is, this portion of the corpus is treated as untagged words, and I should run the step 1 implementation on it.
Is my understanding correct? Please explain if not.
Thanks.
I hope this helps:
from nltk.corpus import brown
from nltk import UnigramTagger as ut

# Let's just take the first 1000 tagged sentences.
sents = brown.tagged_sents()[:1000]
num_sents = len(sents)
k = 10
foldsize = num_sents // k
fold_accuracies = []

for i in range(k):
    # Locate the test set in the fold.
    test = sents[i * foldsize:(i + 1) * foldsize]
    # Use the rest of the sentences (not in test) for training.
    train = sents[:i * foldsize] + sents[(i + 1) * foldsize:]
    # Train a unigram tagger with the train data.
    tagger = ut(train)
    # Evaluate the accuracy using the test data.
    accuracy = tagger.evaluate(test)
    print("Fold", i)
    print("from sent", i * foldsize, "to", (i + 1) * foldsize)
    print("accuracy =", accuracy)
    print()
    fold_accuracies.append(accuracy)

print("average accuracy =", sum(fold_accuracies) / k)
[out]:
Fold 0
from sent 0 to 100
accuracy = 0.785714285714
Fold 1
from sent 100 to 200
accuracy = 0.745431364216
Fold 2
from sent 200 to 300
accuracy = 0.749628896586
Fold 3
from sent 300 to 400
accuracy = 0.743798291989
Fold 4
from sent 400 to 500
accuracy = 0.803448275862
Fold 5
from sent 500 to 600
accuracy = 0.779836277467
Fold 6
from sent 600 to 700
accuracy = 0.772676371781
Fold 7
from sent 700 to 800
accuracy = 0.755679184052
Fold 8
from sent 800 to 900
accuracy = 0.706402915148
Fold 9
from sent 900 to 1000
accuracy = 0.774622079707
average accuracy = 0.761723794252

Learning Weka - Precision and Recall - Wiki example to .Arff file

I'm new to WEKA and advanced statistics, starting from scratch to understand the WEKA measures. I've done all the #rushdi-shams examples, which are great resources.
The Wikipedia page http://en.wikipedia.org/wiki/Precision_and_recall explains precision and recall with a simple example about video software that detects 7 dogs in a group of 9 real dogs and some cats.
I perfectly understand the example and the recall calculation.
So as a first step, let's see how to reproduce this data in Weka.
How do I create such an .ARFF file?
With the file below I get a wrong confusion matrix and wrong accuracy by class:
recall is reported as 1, but it should be 4/9 (0.4444).
@relation 'dogs and cat detection'
@attribute 'realanimal' {dog,cat}
@attribute 'detected' {dog,cat}
@attribute 'class' {correct,wrong}
@data
dog,dog,correct
dog,dog,correct
dog,dog,correct
dog,dog,correct
cat,dog,wrong
cat,dog,wrong
cat,dog,wrong
dog,?,?
dog,?,?
dog,?,?
dog,?,?
dog,?,?
cat,?,?
cat,?,?
Output Weka (without filters)
=== Run information ===
Scheme:weka.classifiers.rules.ZeroR
Relation: dogs and cat detection
Instances: 14
Attributes: 3
realanimal
detected
class
Test mode:10-fold cross-validation
=== Classifier model (full training set) ===
ZeroR predicts class value: correct
Time taken to build model: 0 seconds
=== Stratified cross-validation ===
=== Summary ===
Correctly Classified Instances 4 57.1429 %
Incorrectly Classified Instances 3 42.8571 %
Kappa statistic 0
Mean absolute error 0.5
Root mean squared error 0.5044
Relative absolute error 100 %
Root relative squared error 100 %
Total Number of Instances 7
Ignored Class Unknown Instances 7
=== Detailed Accuracy By Class ===
TP Rate FP Rate Precision Recall F-Measure ROC Area Class
1 1 0.571 1 0.727 0.65 correct
0 0 0 0 0 0.136 wrong
Weighted Avg. 0.571 0.571 0.327 0.571 0.416 0.43
=== Confusion Matrix ===
a b <-- classified as
4 0 | a = correct
3 0 | b = wrong
There must be something wrong with the false-negative dogs.
Or is my ARFF approach totally wrong and do I need a different kind of attributes?
Thanks
Let's start with the basic definitions of precision and recall.
Precision = TP/(TP+FP)
Recall = TP/(TP+FN)
Where TP is True Positive, FP is False Positive, and FN is False Negative.
In the above dog.arff file, Weka took into account only the first 7 instances and ignored the remaining 7 (their class is unknown). You can see from the output above that ZeroR classified all 7 instances as correct (4 actually correct + 3 actually wrong).
Let's calculate the precision and recall for the correct and wrong classes.
First, for the correct class:
Precision = 4/(4+3) = 0.571428571
Recall    = 4/(4+0) = 1
For the wrong class:
Precision = 0/(0+0), which is undefined (Weka reports 0)
Recall    = 0/(0+3) = 0
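You can double-check these numbers outside Weka with a small sklearn sketch over the same 7 evaluated instances:
from sklearn.metrics import precision_score, recall_score

# The 7 instances Weka evaluated: 4 actually 'correct', 3 actually 'wrong';
# ZeroR predicts 'correct' for every one of them.
y_true = ['correct'] * 4 + ['wrong'] * 3
y_pred = ['correct'] * 7

print(precision_score(y_true, y_pred, pos_label='correct'))   # 0.571...
print(recall_score(y_true, y_pred, pos_label='correct'))      # 1.0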
