I am finally able to train a Mahout classifier; now my problem is how to get the target category for my input document.
What is the process for getting the target category for my text documents?
First, you have to vectorize the text document into a RandomAccessSparseVector.
Some sample code for your reference:
// FEATURES is the dimension of your feature space; dictionary maps a term to its
// index and frequency holds the term counts, both built when the model was trained.
Vector vector = new RandomAccessSparseVector(FEATURES);
FeatureExtractor fe = new FeatureExtractor(); // your own feature/token extractor
HashSet<String> fs = fe.extract(text);
for (String s : fs) {
    int index = dictionary.get(s);
    vector.setQuick(index, frequency.get(index));
}
Then, use the Classifier.classify(Vector) to get the result.
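If you trained Mahout's naive Bayes, for example, that last step could look roughly like the sketch below. This is only a sketch against a recent Mahout API: the model path is a placeholder, and labelIndex is an assumed Map<Integer, String> from label index to category name that you would build during training; adjust the classes to your Mahout version.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.mahout.classifier.naivebayes.NaiveBayesModel;
import org.apache.mahout.classifier.naivebayes.StandardNaiveBayesClassifier;
import org.apache.mahout.math.Vector;

// Load the model produced by the training job (path is a placeholder).
NaiveBayesModel model = NaiveBayesModel.materialize(
    new Path("/path/to/model"), new Configuration());
StandardNaiveBayesClassifier classifier = new StandardNaiveBayesClassifier(model);

// Score the document vector built above against every category.
Vector scores = classifier.classifyFull(vector);

// The index with the highest score is the predicted category.
int bestIndex = scores.maxValueIndex();
String category = labelIndex.get(bestIndex); // labelIndex: Map<Integer, String> from training
System.out.println("Predicted category: " + category);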
I am using the following code to create my machine learning model. The accuracy of the model is 0.76. I am just curious to know which records from my test data failed. Is there a way I can see those records?
// 1. Load the dataset for training and testing
var trainData = ctx.Data.LoadFromTextFile<SentimentData>(trainDataPath, hasHeader: true);
var testData = ctx.Data.LoadFromTextFile<SentimentData>(testDataPath, hasHeader: true);
// 2. Build a transformer/estimator to transform the input data so that the machine learning algorithm can understand it
IEstimator<ITransformer> estimator = ctx.Transforms.Text.FeaturizeText("Features", nameof(SentimentData.Text));
// 3. - set the training algorithm and create the pipeline for model builder
var trainer = ctx.BinaryClassification.Trainers.SdcaLogisticRegression();
var trainingPipeline = estimator.Append(trainer);
// 4. - Train the model
var trainedModel = trainingPipeline.Fit(trainData);
// 5. - Perform the predictions on the test data
var predictions = trainedModel.Transform(testData);
// 6. - Evaluate the model
var metrics = ctx.BinaryClassification.Evaluate(data: predictions);
By using the GetColumn and CreateEnumerable methods, you can find the data that the model didn't predict correctly.
After you get the metrics, use the GetColumn method on the predictions from the test data set to get the original label values. Then, use the CreateEnumerable method to get the predictions, which hold the predicted values. Optionally, you can get the sentiment text as well.
var originalLabels = predictions.GetColumn<bool>("Label").ToArray();
var sentimentText = predictions.GetColumn<string>(nameof(SentimentData.SentimentText)).ToArray();
var predictedLabels = ctx.Data.CreateEnumerable<SentimentPrediction>(predictions, reuseRowObject: false).ToArray();
After getting the data, just loop through one of the arrays (I used the count of the original labels) and you can access the data at each iteration. From there you can check whether the actual label differs from the predicted value, so only the rows the model got wrong are printed.
for (int i = 0; i < originalLabels.Count(); i++)
{
    string outputText = String.Empty;
    if (originalLabels[i] != predictedLabels[i].Prediction)
    {
        outputText = $"Text - {sentimentText[i]} | ";
        outputText += $"Original - {originalLabels[i]} | ";
        outputText += $"Predicted - {predictedLabels[i].Prediction}";
        Console.WriteLine(outputText);
    }
}
With that you have the data that you need. :)
Hope that helps!
From your comment, I believe the method you are looking for is in the Keras library. It should be the predict_classes method of your model (keras.models.Sequential.predict_classes), as described on their documentation page.
This will give you an array of predicted outputs, which you can then compare to the ground-truth labels. Visit the documentation to see the parameters.
Hope this helps!
After investing a good amount of time searching the net on this topic, I am ending up here hoping to get some pointers. Please read further.
After analyzing Spark 2.0, I concluded that polynomial regression is not possible with Spark alone. Is there some extension to Spark which can be used for polynomial regression?
- With RSpark it could be done (but I am looking for a better alternative)
- RFormula in Spark does prediction, but the coefficients are not available (which is my main requirement, as I am primarily interested in the coefficient values)
Polynomial regression is just another case of linear regression (as in Polynomial regression is linear regression and Polynomial regression). Since Spark has a method for linear regression, you can call that method after changing the inputs in such a way that the new inputs are the ones suited to polynomial regression. For instance, if you only have one independent variable x and you want to do quadratic regression, you have to change your independent input matrix to [x x^2].
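To make that concrete: a quadratic model in one variable, y = b0 + b1*x + b2*x^2, is still linear in the coefficients b0, b1 and b2 once x and x^2 are treated as two separate input columns. So fitting ordinary linear regression on the expanded inputs [x x^2] (plus the intercept) returns exactly the polynomial coefficients you are looking for.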
I would like to add some information to @Mehdi Lamrani's answer:
If you want to do a polynomial linear regression in SparkML, you may use the class PolynomialExpansion.
For information check the class in the SparkML Doc
or in the Spark API Doc
Here is an implementation example:
Let's assume we have train and test datasets, stored in two CSV files, with headers containing the names of the columns (features, label).
Each data set contains three features named f1, f2, f3, each of type Double (this is the X matrix), as well as a label feature (the Y vector) named myLabel.
For this code I used Spark+Scala:
Scala version: 2.12.8
Spark version: 2.4.0
We assume that the Spark ML dependency was already declared in build.sbt.
First of all, import the libraries:
import org.apache.spark.ml.{Pipeline, PipelineModel}
import org.apache.spark.ml.feature.{PolynomialExpansion, VectorAssembler}
import org.apache.spark.ml.linalg.{Vector, Vectors}
import org.apache.spark.ml.regression.LinearRegression
import org.apache.spark.mllib.evaluation.RegressionMetrics
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.udf
import org.apache.spark.{SparkConf, SparkContext}
Create the Spark session and Spark context:
val ss = org.apache.spark.sql
  .SparkSession.builder()
  .master("local")
  .appName("Read CSV")
  .enableHiveSupport()
  .getOrCreate()

// Reuse the context created by the session rather than building a second one.
val sc = ss.sparkContext
Instantiate the variables you are going to use:
val f_train:String = "path/to/your/train_file.csv"
val f_test:String = "path/to/your/test_file.csv"
val degree:Int = 3 // Set the degree of your choice
val maxIter:Int = 10 // Set the max number of iterations
val lambda:Double = 0.0 // Set the regularization parameter
val alpha:Double = 0.3 // Set the elastic-net mixing parameter
First, let's create several UDFs which will be used for data reading and pre-processing.
The type parameters of the udf toFeatures are the return type, Vector, followed by the types of the feature arguments: (Double, Double, Double).
val toFeatures = udf[Vector, Double, Double, Double] {
(a,b,c) => Vectors.dense(a,b,c)
}
val encodeIntToDouble = udf[Double, Int](_.toDouble)
Now let's create a function which reads the data from a CSV file and creates new features from the existing ones, using PolynomialExpansion:
def getDataPolynomial(
    currentfile:String,
    sc:SparkSession,
    sco:SparkContext,
    degree:Int
  ):DataFrame =
{
  val df_rough:DataFrame = sc.read
    .format("csv")
    .option("header", "true") // first line in file has headers
    .option("mode", "DROPMALFORMED")
    .option("inferSchema", value = true)
    .load(currentfile)
    .toDF("f1", "f2", "f3", "myLabel") // this renaming line is optional

  // Pack the raw columns into a feature vector and cast the label to Double.
  val df:DataFrame = df_rough
    .withColumn("featNormTemp", toFeatures(df_rough("f1"), df_rough("f2"), df_rough("f3")))
    .withColumn("label", encodeIntToDouble(df_rough("myLabel")))

  // Expand the feature vector into polynomial features of the chosen degree.
  val polyExpansion = new PolynomialExpansion()
    .setInputCol("featNormTemp")
    .setOutputCol("polyFeatures")
    .setDegree(degree)

  val polyDF:DataFrame = polyExpansion.transform(df.select("featNormTemp"))
  val datafixedWithFeatures:DataFrame = polyDF.withColumn("features", polyDF("polyFeatures"))

  // Join the expanded features back to the original rows to recover the label.
  val datafixedWithFeaturesLabel = datafixedWithFeatures
    .join(df, df("featNormTemp") === datafixedWithFeatures("featNormTemp"))
    .select("label", "polyFeatures")

  datafixedWithFeaturesLabel
}
Now, run the function both for the train and test datasets, using the chosen degree for the Polynomial expansion.
val X:DataFrame = getDataPolynomial(f_train,ss,sc,degree)
val X_test:DataFrame = getDataPolynomial(f_test,ss,sc,degree)
Run the algorithm in order to get a linear regression model, using a pipeline:
val assembler = new VectorAssembler()
.setInputCols(Array("polyFeatures"))
.setOutputCol("features2")
val lr = new LinearRegression()
.setMaxIter(maxIter)
.setRegParam(lambda)
.setElasticNetParam(alpha)
.setFeaturesCol("features2")
.setLabelCol("label")
// Fit the model:
val pipeline:Pipeline = new Pipeline().setStages(Array(assembler,lr))
val lrModel:PipelineModel = pipeline.fit(X)
// Get prediction on the test set :
val result:DataFrame = lrModel.transform(X_test)
Finally, evaluate the result using the root mean squared error:
def leastSquaresError(result:DataFrame):Double = {
val rm:RegressionMetrics = new RegressionMetrics(
result
.select("label","prediction")
.rdd
.map(x => (x(0).asInstanceOf[Double], x(1).asInstanceOf[Double])))
Math.sqrt(rm.meanSquaredError)
}
val error:Double = leastSquaresError(result)
println("Error : "+error)
I hope this might be useful!
I am a beginner with Weka.
I have managed to import a dataset from disk (one folder per category, with all text related to that category inside the folder), apply StringToWordVector with a tokenizer, and train a Naive Bayes Multinomial categorizer... The code is below (it is C#, but Java is fine of course).
However, I can hardly find information on how to use the categorizer in a project. Say I have a text with an unknown category, input by a user: how can I apply the categorizer to this text and infer the category it belongs to? (See the "// what to do here" comment below.)
Any help would be greatly appreciated ;-)
Thanks in advance
Julien
string filepath = @"C:\Users\Julien\Desktop\Meal\";
ClassificationDatasetHelper classHelper = new ClassificationDatasetHelper();
weka.core.converters.TextDirectoryLoader tdl = new
weka.core.converters.TextDirectoryLoader();
tdl.setDirectory(new java.io.File(filepath));
tdl.setCharSet("UTF-8");
weka.core.Instances insts = tdl.getDataSet();
weka.filters.unsupervised.attribute.StringToWordVector swv = new weka.filters.unsupervised.attribute.StringToWordVector();
swv.setInputFormat(insts);
swv.setDoNotOperateOnPerClassBasis(false);
swv.setOutputWordCounts(true);
swv.setWordsToKeep(1000);
swv.setIDFTransform(true);
swv.setMinTermFreq(1);
swv.setDoNotOperateOnPerClassBasis(false);
swv.setPeriodicPruning(-1);
weka.core.tokenizers.NGramTokenizer tokenizer = new weka.core.tokenizers.NGramTokenizer();
tokenizer.setNGramMinSize(2);
tokenizer.setNGramMaxSize(2);
swv.setTokenizer(tokenizer);
insts = weka.filters.Filter.useFilter(insts, swv);
insts.setClassIndex(0);
weka.classifiers.Classifier cl = new weka.classifiers.bayes.NaiveBayesMultinomial();
int trainSize = insts.numInstances() * percentSplit / 100;
int testSize = insts.numInstances() - trainSize;
weka.core.Instances train = new weka.core.Instances(insts, 0, trainSize);
cl.buildClassifier(train);
string s = "Try to classify this text";
weka.core.Instance instanceToClassify = new weka.core.Instance();
// what to do here
// ???
double predictedClass = cl.classifyInstance(instanceToClassify);
Thanks
The best place to learn how to use Weka in your Java app is in the official Weka wiki.
https://waikato.github.io/weka-wiki/use_weka_in_your_java_code/
Basically, you provide a new dataset (the classifier will ignore the category attribute) and you ask it to label each instance for you, like this
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileReader;
import java.io.FileWriter;
import weka.core.Instances;
...
// load unlabeled data
Instances unlabeled = new Instances(
new BufferedReader(
new FileReader("/some/where/unlabeled.arff")));
// set class attribute
unlabeled.setClassIndex(unlabeled.numAttributes() - 1);
// create copy
Instances labeled = new Instances(unlabeled);
// label instances (tree is the classifier you trained earlier)
for (int i = 0; i < unlabeled.numInstances(); i++) {
    double clsLabel = tree.classifyInstance(unlabeled.instance(i));
    labeled.instance(i).setClassValue(clsLabel);
}
// save labeled data
BufferedWriter writer = new BufferedWriter(
new FileWriter("/some/where/labeled.arff"));
writer.write(labeled.toString());
writer.newLine();
writer.flush();
writer.close();
[Project stack: Java, OpenNLP, Elasticsearch (datastore), twitter4j to read data from Twitter]
I intend to use a maxent classifier to classify tweets. I understand that the initial step is to train the model. From the documentation I found that we have a GISTrainer-based train method to train the model. I have managed to put together a simple piece of code which makes use of OpenNLP's maxent classifier to train the model and predict the outcome.
I have used two files, positive.txt and negative.txt, to train the model.
Contents of positive.txt
positive This is good
positive This is the best
positive This is fantastic
positive This is super
positive This is fine
positive This is nice
Contents of negative.txt
negative This is bad
negative This is ugly
negative This is the worst
negative This is worse
negative This sucks
And the Java methods below generate the outcome.
@Override
public void trainDataset(String source, String destination) throws Exception {
File[] inputFiles = FileUtil.buildFileList(new File(source)); // trains both positive and negative.txt
File modelFile = new File(destination);
Tokenizer tokenizer = SimpleTokenizer.INSTANCE;
CategoryDataStream ds = new CategoryDataStream(inputFiles, tokenizer);
int cutoff = 5;
int iterations = 100;
BagOfWordsFeatureGenerator bowfg = new BagOfWordsFeatureGenerator();
DoccatModel model = DocumentCategorizerME.train("en", ds, cutoff,iterations, bowfg);
model.serialize(new FileOutputStream(modelFile));
}
@Override
public void predict(String text, String modelFile) {
InputStream modelStream = null;
try{
Tokenizer tokenizer = SimpleTokenizer.INSTANCE;
String[] tokens = tokenizer.tokenize(text);
modelStream = new FileInputStream(modelFile);
DoccatModel model = new DoccatModel(modelStream);
BagOfWordsFeatureGenerator bowfg = new BagOfWordsFeatureGenerator();
DocumentCategorizer categorizer = new DocumentCategorizerME(model, bowfg);
double[] probs = categorizer.categorize(tokens);
if(null!=probs && probs.length>0){
for(int i=0;i<probs.length;i++){
System.out.println("double[] probs index " + i + " value " + probs[i]);
}
}
String label = categorizer.getBestCategory(probs);
System.out.println("label " + label);
int bestIndex = categorizer.getIndex(label);
System.out.println("bestIndex " + bestIndex);
double score = probs[bestIndex];
System.out.println("score " + score);
}
catch(Exception e){
e.printStackTrace();
}
finally{
if(null!=modelStream){
try {
modelStream.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
public static void main(String[] args) {
try {
String outputModelPath = "/home/**/sd-sentiment-analysis/models/trainPostive";
String source = "/home/**/sd-sentiment-analysis/sd-core/src/main/resources/datasets/";
MaximunEntropyClassifier me = new MaximunEntropyClassifier();
me.trainDataset(source, outputModelPath);
me.predict("This is bad", outputModelPath);
} catch (Exception e) {
e.printStackTrace();
}
}
I have the following questions.
1) How do I iteratively train a model? Also, how do I add new sentences/words to the model? Is there a specific format for the data file? I found that the file needs to have a minimum of two words separated by a tab. Is my understanding valid?
2) Are there any publicly available data sets that I can use to train the model? I found some sources for movie reviews. The project I'm working on involves not just movie reviews but also other things such as product reviews, brand sentiments etc.
3) This helps to an extent. Is there a working example somewhere publicly available? I couldn't find the documentation for maxent.
Please help me out. I am kind of blocked on this.
1) You can store the samples in a database. I used Accumulo once for this. Then at some interval you rebuild the model and reprocess your data.
2) The format is: category name, a space, the sample text, then a newline. No tabs.
3) It sounds like you want to combine general sentiment with a topic or entity. You could use a name finder or just a regex to find the entity, or add the entity (a product name, etc.) to your class labels for the doccat; then your samples would have to be very specific.
AFAIK, you have to completely retrain a MaxEnt model if you want to add new training samples. It cannot be done incrementally on-line.
The default input format for opennlp maxent is a textual file where each line represents a single sample.
A sample is composed of tokens (features) delimited by whitespace. During training, the first token represents the outcome.
Take a look at my minimal working example here :
Training models using openNLP maxent
I would like to know if there is a way in WEKA to output a number of 'best-guesses' for a classification.
My scenario is: I classify the data with cross-validation, for instance, and then in Weka's output I get something like: these are the 3 best guesses for the classification of this instance. What I want is that, even if an instance isn't correctly classified, I get an output of the 3 or 5 best guesses for that instance.
Example:
Classes: A,B,C,D,E
Instances: 1...10
And output would be:
instance 1 is 90% likely to be class A, 75% likely to be class B, 60% likely to be class C...
Thanks.
Weka's API has a method called Classifier.distributionForInstance() that can be used to get the classification prediction distribution. You can then sort the distribution by decreasing probability to get your top-N predictions.
Below is a function that prints out: (1) the test instance's ground truth label; (2) the predicted label from classifyInstance(); and (3) the prediction distribution from distributionForInstance(). I have used this with J48, but it should work with other classifiers.
The input parameters are the serialized model file (which you can create during the model training phase by applying the -d option) and the test file in ARFF format.
public void test(String modelFileSerialized, String testFileARFF)
throws Exception
{
// Deserialize the classifier.
Classifier classifier =
(Classifier) weka.core.SerializationHelper.read(
modelFileSerialized);
// Load the test instances.
Instances testInstances = DataSource.read(testFileARFF);
// Mark the last attribute in each instance as the true class.
testInstances.setClassIndex(testInstances.numAttributes()-1);
int numTestInstances = testInstances.numInstances();
System.out.printf("There are %d test instances\n", numTestInstances);
// Loop over each test instance.
for (int i = 0; i < numTestInstances; i++)
{
// Get the true class label from the instance's own classIndex.
String trueClassLabel =
testInstances.instance(i).toString(testInstances.classIndex());
// Make the prediction here.
double predictionIndex =
classifier.classifyInstance(testInstances.instance(i));
// Get the predicted class label from the predictionIndex.
String predictedClassLabel =
testInstances.classAttribute().value((int) predictionIndex);
// Get the prediction probability distribution.
double[] predictionDistribution =
classifier.distributionForInstance(testInstances.instance(i));
// Print out the true label, predicted label, and the distribution.
System.out.printf("%5d: true=%-10s, predicted=%-10s, distribution=",
i, trueClassLabel, predictedClassLabel);
// Loop over all the prediction labels in the distribution.
for (int predictionDistributionIndex = 0;
predictionDistributionIndex < predictionDistribution.length;
predictionDistributionIndex++)
{
// Get this distribution index's class label.
String predictionDistributionIndexAsClassLabel =
testInstances.classAttribute().value(
predictionDistributionIndex);
// Get the probability.
double predictionProbability =
predictionDistribution[predictionDistributionIndex];
System.out.printf("[%10s : %6.3f]",
predictionDistributionIndexAsClassLabel,
predictionProbability );
}
System.out.printf("\n");
}
}
I don't know if you can do it natively, but you can just get the probabilities for each class, sort them, and take the first three.
The function you want is distributionForInstance(Instance instance) which returns a double[] giving the probability for each class.
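As a minimal sketch of that approach (assuming you already have a trained Classifier and a test Instances object with its class index set; the helper method name here is just for illustration):
import java.util.Arrays;
import java.util.Comparator;
import weka.classifiers.Classifier;
import weka.core.Instance;
import weka.core.Instances;

// Print the topN most probable classes for one test instance.
public static void printTopN(Classifier classifier, Instances data, int instanceIndex, int topN)
        throws Exception {
    Instance inst = data.instance(instanceIndex);
    double[] dist = classifier.distributionForInstance(inst);

    // Sort class indices by decreasing probability.
    Integer[] order = new Integer[dist.length];
    for (int i = 0; i < order.length; i++) order[i] = i;
    Arrays.sort(order, Comparator.comparingDouble((Integer i) -> dist[i]).reversed());

    for (int k = 0; k < Math.min(topN, order.length); k++) {
        String label = data.classAttribute().value(order[k]);
        System.out.printf("%s : %.2f%n", label, dist[order[k]]);
    }
}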
Not in general. The information you want is not available with all classifiers -- in most cases (for example for decision trees), the decision is clear (albeit potentially incorrect) without a confidence value. Your task requires classifiers that can handle uncertainty (such as the naive Bayes classifier).
Technically the easiest thing to do is probably to train the model and then classify an individual instance, for which Weka should give you the desired output. In general you can of course also do it for sets of instances, but I don't think that Weka provides this out of the box. You would probably have to customise the code or use it through an API (for example in R).
When you calculate a probability for the instance, how exactly do you do this?
I have posted my PART rules and data for the new instance here, but as far as calculating it manually I am not so sure how to do this! Thanks.
EDIT: now calculated:
private float[] getProbDist(String split) {
    // takes in something such as (52/2), meaning the rule covers 52 instances
    // and 2 of them are incorrectly classified
    String[] prob_dis = split.replace("(", "").replace(")", "").split("/");
    if (prob_dis.length > 2)
        return null;
    if (prob_dis.length == 1) {
        // no error count given, e.g. "(52)": treat it as zero misclassified
        String temp = prob_dis[0];
        prob_dis = new String[2];
        prob_dis[0] = temp;
        prob_dis[1] = "0";
    }
    float p1 = Float.parseFloat(prob_dis[0]); // covered instances
    float p2 = Float.parseFloat(prob_dis[1]); // misclassified instances
    // assumes two tags
    float[] tag_prob = new float[2];
    tag_prob[0] = p2 / p1;         // estimated probability of the other class
    tag_prob[1] = 1 - tag_prob[0]; // estimated probability of the predicted class
// returns double[] as being the probabilities
return tag_prob;
}