Vowpal Wabbit same results always - machine-learning

I am using VW for multiclass prediction. The strangest part is that no matter which parameters I use, the result is always the same.
Should that happen, maybe because of my data?
Details:
Around 90k lines of data. A line of the data:
1 2334225|SUBDEPT "D1SUB1" "D2SUB1" |DEPT "DEPT1" "DEPT2" |SCANCODE "11223442" "65434533543" |WDAY Friday |AMTBOUGHT 2
It's a multiclass problem, so the command line is:
vw --ect 38 ../Processed/train.vw.txt --loss_function logistic --link=logistic
The only parameter change that affects the result is switching from --ect to --oaa. I have tried adding the following, but none of them changes the final validation values:
-c -k --passes 20 (it only runs 8 passes before stopping)
--l1 or --l2
--power_t
--ignore D or --ignore d (or s or su...)
The results are always:
average loss = 0.911153 h
Is there something that I am missing here?
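One sanity check worth doing before comparing flags: carve out a fixed held-out file and score every parameter combination on it, rather than relying on the progressive loss printed during training (the trailing h in the output marks VW's internal holdout estimate). A minimal splitting sketch in Python, assuming the file layout above (the split file names are placeholders):

import random

random.seed(0)  # fixed seed so every run is compared on the identical split
with open("../Processed/train.vw.txt") as f:
    lines = f.readlines()
random.shuffle(lines)

cut = int(0.9 * len(lines))  # roughly 81k train / 9k validation of the 90k lines
with open("train_split.vw.txt", "w") as f:
    f.writelines(lines[:cut])
with open("valid_split.vw.txt", "w") as f:
    f.writelines(lines[cut:])

Each run can then train on train_split.vw.txt (saving the model with -f) and be scored on valid_split.vw.txt with vw -t -i model; if the held-out loss still never moves across flag settings, the problem is more likely in the data than in the flags.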

Related

Weka RF doesn't give any confusion matrix or expected results

I am using WEKA to classify a small dataset with only 27 instances as a binary problem. I have tried with bigger datasets and WEKA shows the confusion matrix and the other metrics, but with my main, small 27-instance dataset it only shows this:
Scheme: weka.classifiers.trees.RandomForest -P 100 -I 100 -num-slots 1 -K 0 -M 1.0 -V 0.001 -S 1
Relation: t_PROMIS_mtbi-weka.filters.unsupervised.attribute.Remove-R1
Instances: 27
Attributes: 7
Var2
Var3
Var4
Var5
Var6
Var7
ERS
Test mode: 10-fold cross-validation
=== Classifier model (full training set) ===
RandomForest
Bagging with 100 iterations and base learner
weka.classifiers.trees.RandomTree -K 0 -M 1.0 -V 0.001 -S 1 -do-not-check-capabilities
Time taken to build model: 0.01 seconds
=== Cross-validation ===
=== Summary ===
Correlation coefficient 0.0348
Mean absolute error 0.4544
Root mean squared error 0.529
Relative absolute error 91.7269 %
Root relative squared error 102.952 %
Total Number of Instances 27
I don't understand why this is happening. Is it a size thing?
I have already solved it. The problem was that I was using the numbers 1/0 for my class variable, so WEKA treated the class as numeric and ran regression (hence the correlation coefficient and error metrics instead of a confusion matrix). I changed it to a nominal "Yes"/"No" variable and it works.

Is there a reason why a feature only present in a given class is not being predicted strongly into that class?

Summary & Questions
I'm using liblinear 2.30. I noticed a similar issue in production, so I tried to isolate it with a simple reduced training setup: 2 classes, 1 training doc per class, 5 features with the same weight in my vocabulary, and 1 simple test doc containing only one feature, which is present only in class 2.
a) what's the feature value being used for?
b) I wanted to understand why this test document containing a single feature which is only present in one class is not being strongly predicted into that class?
c) I'm not expecting to have different values per feature. Are there any other implications of increasing each feature value from 1 to something else? How can I determine that number?
d) Could my changes affect other, more complex trainings in a bad way?
What I tried
Below you will find the data for a simple training run (please focus on feature 5):
> cat train.txt
1 1:1 2:1 3:1
2 2:1 4:1 5:1
> train -s 0 -c 1 -p 0.1 -e 0.01 -B 0 train.txt model.bin
iter 1 act 3.353e-01 pre 3.333e-01 delta 6.715e-01 f 1.386e+00 |g| 1.000e+00 CG 1
iter 2 act 4.825e-05 pre 4.824e-05 delta 6.715e-01 f 1.051e+00 |g| 1.182e-02 CG 1
> cat model.bin
solver_type L2R_LR
nr_class 2
label 1 2
nr_feature 5
bias 0
w
0.3374141436539016
0
0.3374141436539016
-0.3374141436539016
-0.3374141436539016
0
Below you will find my model's prediction:
> cat test.txt
1 5:1
> predict -b 1 test.txt model.bin test.out
Accuracy = 0% (0/1)
> cat test.out
labels 1 2
2 0.416438 0.583562
And here is where I'm a bit surprised: the prediction is just [0.42, 0.58], even though feature 5 is present only in class 2. Why?
So I tried increasing the feature value for the test doc from 1 to 10:
> cat newtest.txt
1 5:10
> predict -b 1 newtest.txt model.bin newtest.out
Accuracy = 0% (0/1)
> cat newtest.out
labels 1 2
2 0.0331135 0.966887
And now I get a better prediction [0.03, 0.97]. Thus, I tried retraining with all feature values set to 10:
> cat newtrain.txt
1 1:10 2:10 3:10
2 2:10 4:10 5:10
> train -s 0 -c 1 -p 0.1 -e 0.01 -B 0 newtrain.txt newmodel.bin
iter 1 act 1.104e+00 pre 9.804e-01 delta 2.508e-01 f 1.386e+00 |g| 1.000e+01 CG 1
iter 2 act 1.381e-01 pre 1.140e-01 delta 2.508e-01 f 2.826e-01 |g| 2.272e+00 CG 1
iter 3 act 2.627e-02 pre 2.269e-02 delta 2.508e-01 f 1.445e-01 |g| 6.847e-01 CG 1
iter 4 act 2.121e-03 pre 1.994e-03 delta 2.508e-01 f 1.183e-01 |g| 1.553e-01 CG 1
> cat newmodel.bin
solver_type L2R_LR
nr_class 2
label 1 2
nr_feature 5
bias 0
w
0.19420510395364846
0
0.19420510395364846
-0.19420510395364846
-0.19420510395364846
0
> predict -b 1 newtest.txt newmodel.bin newtest.out
Accuracy = 0% (0/1)
> cat newtest.out
labels 1 2
2 0.125423 0.874577
And again the prediction was still fine for class 2: 0.87.
a) what's the feature value being used for?
Each instance of n features is considered a point in an n-dimensional space, attached to a given label, say +1 or -1 (in your case 1 or 2). A linear SVM tries to find the best hyperplane to separate those instances into two sets, say SetA and SetB. A hyperplane is considered better than another, roughly, when SetA contains more instances labeled +1 and SetB contains more labeled -1, i.e., when it is more accurate. The best hyperplane is saved as the model. In your case, the hyperplane has the formulation:
f(x) = w^T x
where w is the model, e.g., (0.33741, 0, 0.33741, -0.33741, -0.33741) in your first case.
The probability (for LR) formulation is:
prob(x) = 1/(1 + exp(-y*f(x)))
where y = +1 or -1. See Appendix L of the LIBLINEAR paper.
b) I wanted to understand why this test document containing a single feature which is only present in one class is not being strongly predicted into that class?
It is not only 1 5:1 that gives a weak probability such as [0.42, 0.58]; if you predict 2 2:1 4:1 5:1 you will get [0.337417, 0.662583], which suggests the solver is not very confident about the result either, even though the input is exactly the same as a training instance.
The fundamental reason is the value of f(x), which can be seen simply as the distance between x and the hyperplane. The model can be 100% confident that x belongs to a certain class only if that distance is infinitely large (see prob(x)).
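This can be checked numerically: the reported probabilities are exactly this sigmoid applied to f(x) computed from the printed weights. A quick check in Python (w5 and the feature values are copied from the model and test docs above):

import math

def prob_label1(fx):
    # prob(x) = 1/(1 + exp(-y*f(x))) with y = +1 for the first label ("label 1 2")
    return 1.0 / (1.0 + math.exp(-fx))

w5 = -0.3374141436539016   # weight of feature 5 in the first model

fx = w5 * 1                # test doc "1 5:1"
print(prob_label1(fx), 1 - prob_label1(fx))   # 0.416438... 0.583561...

fx = w5 * 10               # test doc "1 5:10"
print(prob_label1(fx), 1 - prob_label1(fx))   # 0.033113... 0.966886...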
c) I'm not expecting to have different values per feature. Are there any other implications of increasing each feature value from 1 to something else? How can I determine that number?
TL;DR
Enlarging both the training and test set is like having a larger penalty parameter C (the -c option). Because a larger C means a stricter penalty on errors, intuitively speaking, the solver has more confidence in its predictions.
Enlarging every feature of the training set alone is like having a smaller C.
Specifically, logistic regression solves the following minimization over w:
min_w 0.5 w^T w + C Σ_i log(1 + exp(-y_i w^T x_i))
(eq. (3) of the LIBLINEAR paper)
For most instances, y_i w^T x_i is positive, and a larger x_i implies a smaller Σ_i log(1 + exp(-y_i w^T x_i)).
So the effect is somewhat similar to having a smaller C, and a smaller C implies a smaller |w|.
On the other hand, enlarging the test set is the same as having a larger |w|. Therefore, the effect of enlarging both the training and test set is basically:
(1) having a smaller |w| when training;
(2) then having a larger |w| when testing.
Because the effect is more dramatic in (2) than in (1), overall, enlarging both the training and test set is like having a larger |w|, or, equivalently, a larger C.
We can run on the data set with every feature multiplied by 10^12. With C=1, we have the model and probability:
> cat model.bin.m1e12.c1
solver_type L2R_LR
nr_class 2
label 1 2
nr_feature 5
bias 0
w
3.0998430106024949e-12
0
3.0998430106024949e-12
-3.0998430106024949e-12
-3.0998430106024949e-12
0
> cat test.out.m1e12.c1
labels 1 2
2 0.0431137 0.956886
Next we run on the original data set. With C=10^12, we have the model and probability:
> cat model.bin.m1.c1e12
solver_type L2R_LR
nr_class 2
label 1 2
nr_feature 5
bias 0
w
3.0998430101989314
0
3.0998430101989314
-3.0998430101989314
-3.0998430101989314
0
> cat test.out.m1.c1e12
labels 1 2
2 0.0431137 0.956886
Therefore, because a larger C means a stricter penalty on errors, the solver is intuitively more confident in its predictions.
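The symmetry between the two runs is easy to verify: the weights differ by exactly the factor 10^12 that the features were scaled by, so w^T x, and hence the probability, is identical in both cases. A quick check in Python, with the numbers copied from the two model files above:

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w5_scaled = -3.0998430106024949e-12   # model trained on features * 1e12, C = 1
x5_scaled = 1e12                      # test feature 5, also * 1e12
w5_big_c  = -3.0998430101989314       # model trained on original features, C = 1e12
x5_orig   = 1.0                       # original test feature 5

print(sigmoid(w5_scaled * x5_scaled))  # 0.0431137... (prob of label 1)
print(sigmoid(w5_big_c * x5_orig))     # 0.0431137... identical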
d) Could my changes affect other more complex trainings in a bad way?
From (c) we know your changes act like a larger C, and that will result in better training accuracy. But it is almost certain that the model overfits the training set when C becomes too large. As a result, the model cannot tolerate the noise in the training set and will perform badly in test accuracy.
As for finding a good C, a popular way is cross validation (the -v option).
Finally, it may be off-topic, but you may want to look at how to pre-process the text data. It is common (e.g., suggested by the author of liblinear here) to normalize the data instance-wise.
For document classification, our experience indicates that if you normalize each document to unit length, then not only the training time is shorter, but also the performance is better.
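A minimal sketch of that instance-wise (unit-length) normalization for liblinear's sparse "label index:value" format; the file names are placeholders:

import math

def normalize_line(line):
    # scale one "label idx:val idx:val ..." line to unit L2 norm
    label, *pairs = line.split()
    feats = [(p.split(":")[0], float(p.split(":")[1])) for p in pairs]
    norm = math.sqrt(sum(v * v for _, v in feats))
    if norm == 0:
        return line.strip()
    return " ".join([label] + ["%s:%.6f" % (i, v / norm) for i, v in feats])

with open("train.txt") as fin, open("train.scaled.txt", "w") as fout:
    for line in fin:
        if line.strip():
            fout.write(normalize_line(line) + "\n")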

WEKA Changing number of decimal places in predictions

I'm trying to get precise predictions from WEKA, and I need to increase the number of decimal places that it outputs for its prediction data.
My .arff training set looks like this:
@relation TrainSet
@attribute TimeDiff1 numeric
@attribute TimeDiff2 numeric
@attribute TimeDiff3 numeric
@attribute TimeDiff4 numeric
@attribute TimeDiff5 numeric
@attribute TimeDiff6 numeric
@attribute TimeDiff7 numeric
@attribute TimeDiff8 numeric
@attribute TimeDiff9 numeric
@attribute TimeDiff10 numeric
@attribute LBN/Distance numeric
@attribute LBNDiff1 numeric
@attribute LBNDiff2 numeric
@attribute LBNDiff3 numeric
@attribute Size numeric
@attribute RW {R,W}
@attribute 'Response Time' numeric
@data
0,0,0,0,0,0,0,0,0,0,203468398592,0,0,0,32768,R,0.006475
0.004254,0,0,0,0,0,0,0,0,0,4564742206976,4361273808384,0,0,65536,R,0.011025
0.002128,0.006382,0,0,0,0,0,0,0,0,4585966117376,21223910400,4382497718784,0,4096,R,0.01389
0.001616,0.003744,0,0,0,0,0,0,0,0,4590576115200,4609997824,25833908224,4387107716608,4096,R,0.005276
0.002515,0.004131,0.010513,0,0,0,0,0,0,0,233456156672,-4357119958528,-4352509960704,-4331286050304,32768,R,0.01009
0.004332,0.006847,0.010591,0,0,0,0,0,0,0,312887472128,79431315456,-4277688643072,-4273078645248,4096,R,0.005081
0.000342,0.004674,0.008805,0,0,0,0,0,0,0,3773914294272,3461026822144,3540458137600,-816661820928,8704,R,0.004252
0.000021,0.000363,0.00721,0,0,0,0,0,0,0,3772221901312,-1692392960,3459334429184,3538765744640,4096,W,0.00017
0.000042,0.000063,0.004737,0.01525,0,0,0,0,0,0,3832104423424,59882522112,58190129152,3519216951296,16384,W,0.000167
0.005648,0.00569,0.006053,0.016644,0,0,0,0,0,0,312887476224,-3519216947200,-3459334425088,-3461026818048,19456,R,0.009504
I'm trying to get predictions for the Response Time, which is the right-most column. As you can see, my data goes to the 6th decimal place.
However, WEKA's predictions only go to the 3rd. Here are the contents of the file named "predictions":
inst# actual predicted error
1 0.006 0.005 -0.002
2 0.011 0.017 0.006
3 0.014 0.002 -0.012
4 0.005 0.022 0.016
5 0.01 0.012 0.002
6 0.005 0.012 0.007
7 0.004 0.018 0.014
8 0 0.001 0
9 0 0.001 0
10 0.01 0.012 0.003
As you can see, this greatly limits the accuracy of my predictions. Very small numbers below 0.0005 (like rows 8 and 9) show up as 0 instead of a more accurate small decimal.
I'm using WEKA on the "Simple Command Line" instead of the GUI. My command to build the model looks like this:
java weka.classifiers.trees.REPTree -M 2 -V 0.00001 -N 3 -S 1 -L -1 -I 0.0 -num-decimal-places 6 \
-t [removed path]/TrainSet.arff \
-T [removed path]/TestSet.arff \
-d [removed path]/model1.model > \
[removed path]/model1output
([removed path]: I just removed the full pathname for privacy)
As you can see, I found this "-num-decimal-places" switch for creating the model.
Then I use the following command to make the predictions:
java weka.classifiers.trees.REPTree \
-T [removed path]/LUN0train.arff \
-l [removed path]/model1.model -p 0 > \
[removed path]/predictions
I can't use the "-num-decimal-places" switch here because WEKA doesn't allow it in this case for some reason. "predictions" is my desired predictions file.
So I run these two commands, and the number of decimal places in the predictions doesn't change! It's still only 3.
I've already looked at this answer, Weka decimal precision, and this answer on the Pentaho forum, but neither gave enough information to answer my question. These answers hinted that changing the number of decimal places might not be possible, but I just want to be sure.
Does anyone know of an option to fix this? Ideally the solution would be on the command line, but if you only know how to do it in the GUI, that's OK.
I just figured out a workaround, which is to simply scale/multiply the data by 1000, get the predictions, and then multiply by 1/1000 when done to recover the original scale. Kinda outside the box, but it works.
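A sketch of that workaround in Python, assuming the layout shown above ('Response Time' as the last ARFF column and the four-column "-p 0" prediction output):

# scale the Response Time column (last field of each data row) by 1000
with open("TrainSet.arff") as fin, open("TrainSet_x1000.arff", "w") as fout:
    in_data = False
    for line in fin:
        if in_data and line.strip():
            *rest, rt = line.strip().split(",")
            line = ",".join(rest + [str(float(rt) * 1000)]) + "\n"
        fout.write(line)
        if line.strip().lower() == "@data":
            in_data = True

# after predicting on the scaled data, map the predicted column back
with open("predictions") as f:
    for row in f:
        parts = row.split()
        if parts and parts[0].isdigit():   # data rows start with inst#
            print(parts[0], float(parts[2]) / 1000)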
EDIT: An alternative way to do it, from Peter Reutemann's answer at http://weka.8497.n7.nabble.com/Changing-decimal-point-precision-td43393.html:
This has been around for a long time. ;-) "-p" is the really old-fashioned way of outputting the predictions. Using the "-classifications" option, you can specify what format the output is to be in (e.g. CSV). The class that you specify with that option has to be derived from "weka.classifiers.evaluation.output.prediction.AbstractOutput":
http://weka.sourceforge.net/doc.dev/weka/classifiers/evaluation/output/prediction/AbstractOutput.html
Here is an example of using 12 decimals for the prediction output using Java:
https://svn.cms.waikato.ac.nz/svn/weka/trunk/wekaexamples/src/main/java/wekaexamples/classifiers/PredictionDecimals.java

Why doesn't Mahout logistic regression give a good AUC when the model is tested on training data?

I'm using the logistic regression of Mahout (version 0.9), but when I check the created model on the same data set it was trained on, I do not see a high value for AUC. I would expect it to be very high since it is the same data set.
My data set is a CSV file with about 7 million lines and has 18 attributes, some numerical and some categorical.
This is how I create the model for logistic regression (I ignore some of the attributes):
$ mahout trainlogistic --input train.csv \
--output ./model \
--categories 2 \
--predictors attribute1 ... attribute15 \
--types w w w n n w w w w w w w n n n \
--target is_delayed \
--rate 100 \
--passes 2 \
--features 500000
And then when I check the AUC value using the model on the same data set:
$ mahout runlogistic --input train.csv --model ./model --auc --confusion
MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
Running on hadoop, using /usr/lib/hadoop/bin/hadoop and HADOOP_CONF_DIR=/etc/hadoop/conf
MAHOUT-JOB: /usr/lib/mahout/mahout-examples-0.9-cdh5.3.0-job.jar
AUC = 0.48
confusion: [[1703477.0, 761921.0], [3034369.0, 1137161.0]]
entropy: [[NaN, NaN], [-16.5, -17.4]]
15/01/18 06:50:50 INFO driver.MahoutDriver: Program took 98213 ms (Minutes: 1.6368833333333332)
I'm really confused why I only get AUC = 0.48 instead of 1.00, or something very close, since it is the same data set.
Am I missing something?
I tried with only a few attributes, but the AUC is still very low, around 0.47, which means the model is almost guessing randomly.

How to do proper testing in Weka and how to get desired results?

I am currently working on an application of ANN, SVM and linear regression methods for predicting the fruit yield of a region based on meteorological factors (13 factors).
The total data set is 36 instances.
When implementing those methods in WEKA I am getting BAD results.
Like in the case of MultilayerPerceptron, my results are
(I divided the dataset into 28 instances for training and 8 for testing):
=== Run information ===
Scheme: weka.classifiers.functions.MultilayerPerceptron -L 0.3 -M 0.2 -N 500 -V 0 -S 0 -E 20 -H a -G -R
Relation: apr6_data
Instances: 28
Attributes: 15
Time taken to build model: 3.69 seconds
=== Predictions on test set ===
inst# actual predicted error
1 2.551 2.36 -0.191
2 2.126 3.079 0.953
3 2.6 1.319 -1.281
4 1.901 3.539 1.638
5 2.146 3.635 1.489
6 2.533 2.917 0.384
7 2.54 2.744 0.204
8 2.82 3.473 0.653
=== Evaluation on test set ===
=== Summary ===
Correlation coefficient -0.4415
Mean absolute error 0.8493
Root mean squared error 1.0065
Relative absolute error 144.2248 %
Root relative squared error 153.5097 %
Total Number of Instances 8
In the case of SVM for regression:
inst# actual predicted error
1 2.551 2.538 -0.013
2 2.126 2.568 0.442
3 2.6 2.335 -0.265
4 1.901 2.556 0.655
5 2.146 2.632 0.486
6 2.533 2.24 -0.293
7 2.54 2.766 0.226
8 2.82 3.175 0.355
=== Evaluation on test set ===
=== Summary ===
Correlation coefficient 0.2888
Mean absolute error 0.3417
Root mean squared error 0.3862
Relative absolute error 58.0331 %
Root relative squared error 58.9028 %
Total Number of Instances 8
What could be the possible error in my application? Please let me know!
Thanks
Do I need to normalize the data? I assume that is being done by the WEKA classifiers.
If you want to normalize the data, you have to do it yourself: Preprocess tab -> Filters (choose) -> find Normalize, then click Apply.
If you want to discretize your data, follow the same process.
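For reference, WEKA's unsupervised Normalize filter min-max scales every numeric attribute to [0, 1]. The same transformation, sketched in Python for a plain table of numeric columns:

def normalize_columns(rows):
    # min-max scale every column of a numeric table to [0, 1]
    cols = list(zip(*rows))
    scaled = []
    for col in cols:
        lo, hi = min(col), max(col)
        rng = hi - lo
        scaled.append([(v - lo) / rng if rng else 0.0 for v in col])
    return [list(r) for r in zip(*scaled)]

rows = [[2.0, 10.0], [4.0, 30.0], [6.0, 20.0]]
print(normalize_columns(rows))  # [[0.0, 0.0], [0.5, 1.0], [1.0, 0.5]]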
You might have better luck with discretising the prediction e.g. into low/medium/high yield.
Whether you need to normalize or discretize cannot be decided from your data or from a single run. For instance, discretization brings better results for naive Bayes classifiers; for SVM, I'm not sure.
I did not see precision, recall or F-score for your data. But since you say you get bad results on the test set, it is very possible that your classifier is overfitting. Try to increase the number of training instances (36 is too few, I'd say). Keep us posted on what happens when you increase the training instances.
