I'm trying to test my model with a new dataset. I have done the same preprocessing steps as I did when building the model. I have compared the two files and found no differences: all the attributes (train vs test dataset) are in the same order, with the same attribute names and data types. But I still cannot resolve the issue. The train and test files look identical, yet the Weka Explorer gives me the error "Train and test set are not compatible". How do I resolve this error? Is there any way to make the test.arff file format match train.arff? Please, somebody help me.
To repeat the comment I left under the problem statement:
All three attributes are nominal attributes, followed by all the possible values enclosed in '{}'. My guess is that the sets of possible values are not the same. For example, for the RESOURCE attribute the value 199 does not appear in the test file, while it does in the training file.
After struggling with the same problem for a day, I figured out two ways to make the trained model work on a supplied test set.
Method 1.
Use the Knowledge Flow. For example, something like this:
CSVLoader (for the train set) -> ClassAssigner -> TrainingSetMaker -> (classifier of your choice) -> ClassifierPerformanceEvaluator -> TextViewer.
CSVLoader (for the test set) -> ClassAssigner -> TestSetMaker -> (the same classifier instance as above) -> PredictionAppender -> CSVSaver.
Then load the data from the CSVLoader (or ArffLoader) for the training set. The model will be trained.
After that, load data from the loader for the test set. It will evaluate the model (the classifier, for example) on the supplied test set, and you can see the result in the TextViewer (connected to the ClassifierPerformanceEvaluator) and get the saved result from the CSVSaver or ArffSaver connected to the PredictionAppender. An additional column, "classified as", will be added to the output file.
In my case, I used "?" in the class column of the supplied test set, since the class labels were not available.
Method 2.
Combine the training and test sets into one file. Then the exact same filter can be applied to both. Afterwards you can separate the training and test sets again by applying an instance filter. Since I use "?" as the class label in the test set, it is not visible among the instance filter indices; just select the indices that you can see in the attribute values to be removed when applying the instance filter, and you will be left with only the test data. Save it, then load it as the supplied test set on the Classify page; this time it will work. I suspect it is the class attribute that causes the "train and test set are not compatible" issue, since many classifiers require a nominal class attribute whose value is converted to an index into the declared values of the class attribute, as described at
http://weka.wikispaces.com/Why+do+I+get+the+error+message+%27training+and+test+set+are+not+compatible%27%3F
See the following answer: your train.arff and test.arff should have the same header. According to your comparison they are similar, but not the same.
I just encountered the same problem and found a bare-bones solution. My files are in .csv format, so I simply open them (the training and testing files, respectively) in WEKA and use the Save button on the Preprocess panel to save them in .arff format.
That solves the problem.
Note that there is a difference between similar and the same: your train.arff and test.arff should have the same header, and if they don't, you should copy the header of train.arff and paste it into test.arff as its new header.
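For illustration, both files would need an identical header section along these lines (the relation name, the attributes other than RESOURCE, and the value sets are made up here, not taken from the question); every nominal attribute must list exactly the same values in the same order in both files:

@relation access_data
@attribute RESOURCE {101, 154, 199, 305}
@attribute ACTION {read, write}
@attribute class {yes, no}
@data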
trainPath = ""
otherPadelPath = ""
testPath = ""

# Read the attribute (column) names from the training file's header line
trainFile = open(trainPath, "r")
trainAttributes = trainFile.readlines()[0].strip().split(",")
trainFile.close()

# Read the whole test (other) file
otherPadelFile = open(otherPadelPath, "r")
otherPadelLines = otherPadelFile.readlines()
otherPadelFile.close()

# Collect, in training order, the indices of the matching columns in the test file
otherPadelHeader = otherPadelLines[0].strip().split(",")
otherPadelColumns = []
testLines = []
for attribute in trainAttributes:
    if attribute in otherPadelHeader:
        otherPadelColumns += [otherPadelHeader.index(attribute)]

# Rebuild every line of the test file with its columns in training order
for line in otherPadelLines:
    fields = line.strip().split(",")
    rearrangedLine = []
    for inDex in otherPadelColumns:
        rearrangedLine += [fields[inDex]]
    testLines += [",".join(rearrangedLine) + "\n"]

# Write the rearranged test set
testFile = open(testPath, "w")
testFile.writelines(testLines)
testFile.close()
This script rearranges your test dataset so that it contains the same order and number of attribute columns as your training set, provided that each attribute has the same type and title. Also (in keeping with the WEKA default), the class attribute should be in the last column of both datasets.
I have some tabular device data comprising a time column, some tabular features, and target classes. There are around 500 rows (not the same number) in each device's data, and the target classes are the same across devices. I have the same kind of data for around 1000 devices, and I want to train one general model for all the devices to detect the class. Can someone help me with an approach to training for the target variable? What kinds of models work in this situation?
If your device type is part of the data, you can train a decision tree. If the device type feature is important for classification, it will be added to the tree. First, create the device-type features yourself: a binary column for each device type, as in one-hot encoding. There will be a binary column per device type - is_device_samsung, is_device_lg, is_device_iphone and so forth. The number of columns created equals the number of device types; all but one of these columns will be 0, and the one indicating the current type will be 1. This does not guarantee that the device type will be part of the model - it lets the learning algorithm decide that for you.
BTW - don't use get_dummies unless you know how to reuse it exactly as needed in the test data.
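As a minimal sketch of the manual approach (the column names, device types, and example values below are assumptions for illustration, not taken from the question), fixing the list of device types from the training data keeps the train and test columns identical without get_dummies:

import pandas as pd

# Hypothetical example frames; replace with your real device data
train = pd.DataFrame({"device_type": ["samsung", "lg", "iphone"], "feature_a": [1.0, 2.0, 3.0]})
test = pd.DataFrame({"device_type": ["iphone", "samsung"], "feature_a": [4.0, 5.0]})

# Fix the set of device types using the training data only
device_types = sorted(train["device_type"].unique())

def add_device_columns(df, device_types):
    # One binary is_device_<type> column per known device type
    for dt in device_types:
        df["is_device_" + dt] = (df["device_type"] == dt).astype(int)
    return df

train = add_device_columns(train, device_types)
test = add_device_columns(test, device_types)  # unseen types simply get all zeros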
Another option is to use the python-weka wrapper, which accepts nominal attributes:
Example:
import weka.core.jvm as jvm
from weka.core.converters import Loader
from weka.classifiers import Classifier

def get_weka_prob(inst):
    # Probability that this instance belongs to the 'DONE' class, using the classifier c built below
    dist = c.distribution_for_instance(inst)
    p = dist[next((i for i, x in enumerate(inst.class_attribute.values) if x == 'DONE'), -1)]
    return p

jvm.start()

loader = Loader(classname="weka.core.converters.CSVLoader")
data = loader.load_file(r'.\recs_csv\df.csv')
data.class_is_last()
datatst = loader.load_file(r'.\recs_csv\dftst.csv')
datatst.class_is_last()

c = Classifier(classname="weka.classifiers.trees.J48", options=["-C", "0.1"])
c.build_classifier(data)
print(c)

probstst = [get_weka_prob(inst) for inst in datatst]
jvm.stop()
Weka models are different from sklearn models: they run in the JVM and are called from Python through a Java bridge, so the methods you invoke are Java methods exposed via that bridge. To use the dataframe in sklearn instead, you would have to manipulate it with one-hot encoding. Note that nominal attributes in Weka cannot contain certain special characters, so use
df = df.replace([',', '"', "'", "%", ";"], '', regex=True)
on any nominal attribute before saving it to CSV.
If you want to ensure that the model_type feature will be included in your model, you can trick it by adding a dummy model type and ensuring that the class column for this dummy model is always "1" or "True", depending on your class variable. If you have enough rows with this dummy model, J48 will open it as the first branch. Once the attribute is selected by J48, it will be branched for all of the model types, not just the dummy one.
I am reading the Hands-On Machine Learning book, and the author talks about the random seed during the train/test split. At one point he says that, over time, you (or your machine learning algorithms) will get to see the whole dataset.
The author uses the following function for the train/test split:
import numpy as np

def split_train_test(data, test_ratio):
    shuffled_indices = np.random.permutation(len(data))
    test_set_size = int(len(data) * test_ratio)
    test_indices = shuffled_indices[:test_set_size]
    train_indices = shuffled_indices[test_set_size:]
    return data.iloc[train_indices], data.iloc[test_indices]
The function is used like this:
>>> train_set, test_set = split_train_test(housing, 0.2)
>>> len(train_set)
16512
>>> len(test_set)
4128
Well, this works, but it is not perfect: if you run the program again, it will generate a different test set! Over time, you (or your Machine Learning algorithms) will get to see the whole dataset, which is what you want to avoid.
Sachin Rastogi: Why and how will this impact my model's performance? I understand that my model's accuracy will vary on each run because the train set will always be different. How will my model see the whole dataset over time?
The author also offers a few solutions:
One solution is to save the test set on the first run and then load it in subsequent runs. Another option is to set the random number generator’s seed (e.g., np.random.seed(42)) before calling np.random.permutation(), so that it always generates the same shuffled indices.
But both these solutions will break next time you fetch an updated dataset. A common solution is to use each instance’s identifier to decide whether or not it should go in the test set (assuming instances have a unique and immutable identifier).
Sachin Rastogi: Will that be a good train/test division? I think not - the train and test sets should contain elements from across the dataset to avoid any bias in the train set.
The author gives an example:
You could compute a hash of each instance’s identifier and put that instance in the test set if the hash is lower or equal to 20% of the maximum hash value. This ensures that the test set will remain consistent across multiple runs, even if you refresh the dataset.
The new test set will contain 20% of the new instances, but it will not contain any instance that was previously in the training set.
Sachin Rastogi: I am not able to understand this solution. Could you please help?
For me, these are the answers:
The point here is that you should better put aside part of your data (which will constitute your test set) before training the model. Indeed, what you want to achieve is to be able to generalize well on unseen examples. By running the code that you have shown, you'll get different test sets through time; in other words, you'll always train your model on different subsets of your data (and possibly on data that you've previously marked as test data). This in turn will affect training and - going to the limit - there will be nothing to generalize to.
This will be indeed a solution satisfying the previous requirement (of having a stable test set) provided that new data are not added.
As said in the comments to your question, by hashing each instance's identifier you can be sure that old instances always get assigned to the same subsets.
Instances that were put in the training set before the update of the dataset will remain there (their hash value won't change, so it will remain higher than 0.2*max_hash_value);
Instances that were put in the test set before the update of the dataset will remain there (their hash value won't change, so it will remain lower than 0.2*max_hash_value).
The updated test set will contain 20% of the new instances plus all of the instances from the old test set, so it remains stable.
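For concreteness, here is a minimal sketch along the lines of the book's hash-based approach (the function names and the "index" id column follow the book's example; the exact hashing details may differ between editions):

from zlib import crc32
import numpy as np

def test_set_check(identifier, test_ratio):
    # Put an instance in the test set if its hashed id falls in the lowest test_ratio fraction of hash values
    return crc32(np.int64(identifier)) & 0xffffffff < test_ratio * 2**32

def split_train_test_by_id(data, test_ratio, id_column):
    ids = data[id_column]
    in_test_set = ids.apply(lambda id_: test_set_check(id_, test_ratio))
    return data.loc[~in_test_set], data.loc[in_test_set]

# e.g. housing_with_id = housing.reset_index()  # adds an 'index' column to use as the id
# train_set, test_set = split_train_test_by_id(housing_with_id, 0.2, "index")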
I would also suggest looking here for an explanation from the author: https://github.com/ageron/handson-ml/issues/71.
I've been looking into Google Dataprep as an ETL solution to perform some basic data transformation before feeding it to a machine learning platform. I'm wondering if it's possible to use the Dataprep/Dataflow tools to split a dataset into train, test, and validation sets. Ideally I'm looking to do a stratified split on a target column, but for starters I'd settle for a simple uniform random split by percent of whole (e.g. 50% train, 30% validation, 20% test).
So far I haven't been able to find anything about whether this is even possible with Dataprep, so I'm wondering if anyone knows definitively if this is possible and, if so, how to accomplish it.
EDIT 1
Thanks #jakub-janoštík for getting me going in the right direction! I modified your answer slightly and came up with the following (in wrangle form):
case condition: customConditions cases: [false,0] default: rand() as: 'split_condition'
case condition: customConditions cases: [split_condition < 0.6,'train'],[split_condition >= 0.8,'test'] default: 'validation' as: 'dataset_type'
drop col: split_condition action: Drop
By assigning random values in a separate step, I got the guaranteed percentage split I was looking for. The flow ended up looking like this:
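For readers who want the same logic outside Dataprep, a rough pandas equivalent (the column name, function name, and thresholds are my own; the thresholds assume a 60/20/20 split like the recipe above) looks like this:

import numpy as np
import pandas as pd

def assign_dataset_type(df, seed=None):
    # Assign each row a random value, then map it to train / test / validation
    rng = np.random.default_rng(seed)
    split_condition = rng.random(len(df))
    dataset_type = np.where(split_condition < 0.6, "train",
                   np.where(split_condition >= 0.8, "test", "validation"))
    return df.assign(dataset_type=dataset_type)

# df = assign_dataset_type(df, seed=42)
# train = df[df["dataset_type"] == "train"]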
Image: final flow diagram with dataset splitting
EDIT 2
I just figured out how to do the stratified split too, so I thought I'd add it in case anyone else is trying to do this. Here are the rough steps:
-Split your dataset based on whatever subpopulations you're targeting (e.g. target0, target1)
-For each subpopulation, do the uniform random split described above (e.g. now you have target0-train, target0-test, target0-validation, target1-train, etc.)
-For each set type (i.e. train, test, validation):
 -Create a new recipe from one of the sets
 -Edit the recipe, and use the Union transform to merge it with the other datasets of the same type (e.g. target0-train union with target1-train). The union button is in the middle of the toolbar on the Edit Recipe page.
I hope that's helpful to someone!
I'm looking at the same problem and I was able to partially solve it using the "case on custom condition" and "Random" functions. What I do is create a new column named target and apply the following logic:
After applying this you'll have a new column with these 3 new labels, and you can generate 3 new datasets by applying row filtering rules based on those values. One thing to keep in mind is that each time you run the job you'll get a different validation set, so if you want to keep it fixed you need to use the dataset created in the first run as input for future runs (and randomise only the train and test sets).
If you need more control over the distribution of labels in your datasets, there is the ROWNUMBER window function that could potentially be used, but I haven't been able to make it work yet.
I'm trying to create a model with a training dataset and want to label the records in a test data set.
All the tutorials and help I find online only cover cross-validation with a single dataset, i.e. the training dataset. I couldn't find how to use separate test data. I tried to apply the resulting model to the test set, but the test set seems to have a different number of attributes than the training set after pre-processing. This is a text classification problem.
At the end I get output like this:
18.03.2013 01:47:00 Results of ResultWriter 'Write as Text (2)' [1]:
18.03.2013 01:47:00 SimpleExampleSet:
5275 examples,
366 regular attributes,
special attributes = {
confidence_1 = #367: confidence(1) (real/single_value)
confidence_5 = #368: confidence(5) (real/single_value)
confidence_2 = #369: confidence(2) (real/single_value)
confidence_4 = #370: confidence(4) (real/single_value)
prediction = #366: prediction(label) (nominal/single_value)/values=[1, 5, 2, 4]
}
But what I wanted was for all my examples to be labelled.
It seems that my test data and training data have different numbers of attributes; I see many entries like the following in the logs.
Mar 18, 2013 1:46:41 AM WARNING: Kernel Model: The given example set does not contain a regular attribute with name 'wireless'. This might cause problems for some models depending on this particular attribute.
But how do we solve such a problem in text classification, where we cannot know the number and names of the attributes beforehand?
Can someone please give me some pointers?
You probably use a Process Documents operator to preprocess both the training and test sets. Here it is important that both of these operators are set up identically. To "synchronize" the wordlist, i.e. to consider the same set of words in both of them, you have to connect the wordlist (wor) output of the Process Documents operator used for training to the corresponding input port of the Process Documents operator used for preprocessing the test set.
I have prepared two different .arff files from two different datasets, one for testing and the other for training. Each of them has an equal number of instances but different features, so the dimensionality of the feature vector differs between the files. When I ran cross-validation on each of these files separately, they worked perfectly. This shows the .arff files are properly prepared and contain no errors.
Now, if I use the training file (which has lower dimensionality than the test file) to build the model and then evaluate on the test file, I get the following error.
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 5986
at weka.classifiers.bayes.NaiveBayesMultinomial.probOfDocGivenClass(NaiveBayesMultinomial.java:295)
at weka.classifiers.bayes.NaiveBayesMultinomial.distributionForInstance(NaiveBayesMultinomial.java:254)
at weka.classifiers.Evaluation.evaluationForSingleInstance(Evaluation.java:1657)
at weka.classifiers.Evaluation.evaluateModelOnceAndRecordPrediction(Evaluation.java:1694)
at weka.classifiers.Evaluation.evaluateModel(Evaluation.java:1574)
at TrainCrossValidateARFF.main(TrainCrossValidateARFF.java:44)
Does the test file in Weka require the same number of features as the training file, or can it have fewer?
Code for evaluation
import java.text.DecimalFormat;

import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayesMultinomial;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class TrainCrossValidateARFF {
    private static DecimalFormat df = new DecimalFormat("#.##");

    public static void main(String args[]) throws Exception {
        if (args.length != 1 && args.length != 2) {
            System.out.println("USAGE: CrossValidateARFF <arff_file> [<stop_words_file>]");
            System.exit(-1);
        }
        // load the training data and set the last attribute as the class
        String TrainarffFilePath = args[0];
        DataSource ds = new DataSource(TrainarffFilePath);
        Instances Train = ds.getDataSet();
        Train.setClassIndex(Train.numAttributes() - 1);

        // load the test data and set its class attribute
        String TestarffFilePath = args[1];
        DataSource ds1 = new DataSource(TestarffFilePath);
        Instances Test = ds1.getDataSet();
        Test.setClassIndex(Test.numAttributes() - 1);

        System.out.println("-----------" + TrainarffFilePath + "--------------");
        System.out.println("-----------" + TestarffFilePath + "--------------");

        // train on the training set and evaluate on the separate test set
        NaiveBayesMultinomial naiveBayes = new NaiveBayesMultinomial();
        naiveBayes.buildClassifier(Train);
        Evaluation eval = new Evaluation(Train);
        eval.evaluateModel(naiveBayes, Test);
        System.out.println(eval.toSummaryString("\nResults\n======\n", false));
    }
}
Does the test file in Weka require the same or a smaller number of features than the training file?
The same number of features is necessary. You may need to insert "?" for the class attribute too.
According to Weka Architect Mark Hall
To be compatible, the header information of the two sets of instances needs to be the same - same number of attributes, with the same names in the same order. Furthermore, any nominal attributes must have the same values declared in the same order in both sets of instances.
For unknown class values in your test set just set the value of each to missing - i.e "?".
According to Weka's wiki, the number of features needs to be the same for both the training and test sets. Also, the type of these features (e.g. nominal, numeric, etc.) needs to be the same.
Also, I assume that you didn't apply any Weka filters to either of your datasets. The datasets often become incompatible if you apply filters separately on each dataset (even if it is the same filter).
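One common way to keep the two sets compatible when a filter is involved is to initialise the filter on the training data once and then apply that same filter object to both sets. A rough sketch with python-weka-wrapper3 (the file paths and the choice of StringToWordVector are assumptions for illustration, not taken from the question):

import weka.core.jvm as jvm
from weka.core.converters import Loader
from weka.filters import Filter

jvm.start()
loader = Loader(classname="weka.core.converters.ArffLoader")
train = loader.load_file("train.arff")   # hypothetical paths
test = loader.load_file("test.arff")
train.class_is_last()
test.class_is_last()

# Initialise the filter on the training data only, then reuse it on the test data
stwv = Filter(classname="weka.filters.unsupervised.attribute.StringToWordVector")
stwv.inputformat(train)
train_filtered = stwv.filter(train)
test_filtered = stwv.filter(test)   # yields the same attribute set as the training data

jvm.stop()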
How do I divide a dataset into training and test set?
You can use the RemovePercentage filter (package weka.filters.unsupervised.instance).
In the Explorer just do the following:
training set:
-Load the full dataset
-select the RemovePercentage filter in the preprocess panel
-set the correct percentage for the split
-apply the filter
-save the generated data as a new file
test set:
-Load the full dataset (or just use undo to revert the changes to the dataset)
-select the RemovePercentage filter if not yet selected
-set the invertSelection property to true
-apply the filter
-save the generated data as a new file
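The same split can also be scripted outside the Explorer. A rough sketch using python-weka-wrapper3 (the wrapper, the 70/30 split, and the file names are assumptions for illustration), mirroring the two filter runs above:

import weka.core.jvm as jvm
from weka.core.converters import Loader, Saver
from weka.filters import Filter

jvm.start()
loader = Loader(classname="weka.core.converters.ArffLoader")
data = loader.load_file("full_dataset.arff")   # hypothetical file name

# Training set: RemovePercentage drops 30% of the instances
remove = Filter(classname="weka.filters.unsupervised.instance.RemovePercentage", options=["-P", "30"])
remove.inputformat(data)
train = remove.filter(data)

# Test set: the same filter with invertSelection (-V) keeps the removed 30% instead
remove_inv = Filter(classname="weka.filters.unsupervised.instance.RemovePercentage", options=["-P", "30", "-V"])
remove_inv.inputformat(data)
test = remove_inv.filter(data)

saver = Saver(classname="weka.core.converters.ArffSaver")
saver.save_file(train, "train.arff")
saver.save_file(test, "test.arff")
jvm.stop()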