[Project stack: Java, OpenNLP, Elasticsearch (datastore), Twitter4J to read data from Twitter]
I intend to use a maxent classifier to classify tweets. I understand that the initial step is to train the model. From the documentation I found that there is a GISTrainer-based train method for training the model. I have managed to put together a simple piece of code that uses OpenNLP's maxent classifier to train the model and predict the outcome.
I have used two files, positive.txt and negative.txt, to train the model.
Contents of positive.txt
positive This is good
positive This is the best
positive This is fantastic
positive This is super
positive This is fine
positive This is nice
Contents of negative.txt
negative This is bad
negative This is ugly
negative This is the worst
negative This is worse
negative This sucks
The Java methods below generate the outcome.
@Override
public void trainDataset(String source, String destination) throws Exception {
    File[] inputFiles = FileUtil.buildFileList(new File(source)); // trains on both positive.txt and negative.txt
    File modelFile = new File(destination);
    Tokenizer tokenizer = SimpleTokenizer.INSTANCE;
    CategoryDataStream ds = new CategoryDataStream(inputFiles, tokenizer);
    int cutoff = 5;
    int iterations = 100;
    BagOfWordsFeatureGenerator bowfg = new BagOfWordsFeatureGenerator();
    DoccatModel model = DocumentCategorizerME.train("en", ds, cutoff, iterations, bowfg);
    model.serialize(new FileOutputStream(modelFile));
}
@Override
public void predict(String text, String modelFile) {
    InputStream modelStream = null;
    try {
        Tokenizer tokenizer = SimpleTokenizer.INSTANCE;
        String[] tokens = tokenizer.tokenize(text);
        modelStream = new FileInputStream(modelFile);
        DoccatModel model = new DoccatModel(modelStream);
        BagOfWordsFeatureGenerator bowfg = new BagOfWordsFeatureGenerator();
        DocumentCategorizer categorizer = new DocumentCategorizerME(model, bowfg);
        double[] probs = categorizer.categorize(tokens);
        if (null != probs && probs.length > 0) {
            for (int i = 0; i < probs.length; i++) {
                System.out.println("double[] probs index " + i + " value " + probs[i]);
            }
        }
        String label = categorizer.getBestCategory(probs);
        System.out.println("label " + label);
        int bestIndex = categorizer.getIndex(label);
        System.out.println("bestIndex " + bestIndex);
        double score = probs[bestIndex];
        System.out.println("score " + score);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (null != modelStream) {
            try {
                modelStream.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
public static void main(String[] args) {
    try {
        String outputModelPath = "/home/**/sd-sentiment-analysis/models/trainPostive";
        String source = "/home/**/sd-sentiment-analysis/sd-core/src/main/resources/datasets/";
        MaximunEntropyClassifier me = new MaximunEntropyClassifier();
        me.trainDataset(source, outputModelPath);
        me.predict("This is bad", outputModelPath);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I have the following questions.
1) How do I iteratively train a model? Also, how do I add new sentences/words to the model? Is there a specific format for the data file? I found that the file needs to have a minimum of two words separated by a tab. Is my understanding valid?
2) Are there any publicly available data sets that I can use to train the model? I found some sources for movie reviews. The project I'm working on involves not just movie reviews but also other things such as product reviews, brand sentiments etc.
3) This helps to an extent. Is there a working example somewhere publicly available? I couldn't find the documentation for maxent.
Please help me out. I am kind of blocked on this.
1) You can store the samples in a database. I used Accumulo once for this. Then, at some interval, you rebuild the model and reprocess your data (see the sketch after this list).
2) The format is: category name, a single space, the sample text, then a newline. No tabs.
3) It sounds like you want to combine general sentiment with a topic or entity. You could use a name finder or just a regex to find the entity, or add the entity to your doccat class labels (include a product name, etc.); in that case your samples would have to be very specific.
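To make point 1 concrete, here is a minimal sketch of the periodic rebuild, meant to sit next to the trainDataset method from the question. The samples list, file names, and working directory are hypothetical placeholders for whatever your datastore gives you.

// Hypothetical sketch: dump the stored samples back into doccat-format files
// and retrain from scratch (the model cannot be updated incrementally).
// "samples" is a placeholder: each entry is {label, text}, e.g. {"positive", "This is good"}.
public void rebuildModel(java.util.List<String[]> samples, String workDir, String modelPath) throws Exception {
    try (PrintWriter positive = new PrintWriter(new File(workDir, "positive.txt"));
         PrintWriter negative = new PrintWriter(new File(workDir, "negative.txt"))) {
        for (String[] sample : samples) {
            PrintWriter out = "positive".equals(sample[0]) ? positive : negative;
            out.println(sample[0] + " " + sample[1]); // category, a single space, then the sample text
        }
    }
    trainDataset(workDir, modelPath); // full retrain over old + new samples
}

The key point is that every rebuild re-reads all samples, old and new; there is no incremental update of an existing DoccatModel.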
AFAIK, you have to completely retrain a MaxEnt model if you want to add new training samples. It cannot be done incrementally on-line.
The default input format for OpenNLP maxent is a text file where each line represents a single sample.
A sample is composed of tokens (features) delimited by whitespace. During training, the first token represents the outcome.
Take a look at my minimal working example here:
Training models using openNLP maxent
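For completeness, here is a minimal sketch of training a raw maxent model on a file in that format (first token is the outcome, remaining tokens are features). It assumes the classic opennlp.maxent API (GIS, BasicEventStream, PlainTextByLineDataStream); package names vary between maxent/OpenNLP versions, and the file and model names below are placeholders.

import java.io.File;
import java.io.FileReader;
import opennlp.maxent.BasicEventStream;
import opennlp.maxent.GIS;
import opennlp.maxent.GISModel;
import opennlp.maxent.PlainTextByLineDataStream;
import opennlp.maxent.io.SuffixSensitiveGISModelWriter;
import opennlp.model.EventStream;

public class TrainMaxentSketch {
    public static void main(String[] args) throws Exception {
        // Each line of the training file: outcome first, then whitespace-separated
        // features, e.g. "positive this is fantastic".
        EventStream events = new BasicEventStream(
                new PlainTextByLineDataStream(new FileReader("sentiment-train.txt")));
        GISModel model = GIS.trainModel(events);
        new SuffixSensitiveGISModelWriter(model, new File("sentiment-model.bin.gz")).persist();
    }
}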
Related
I am using the following code to create my machine learning model. The accuracy of the model is 0.76. I am just curious to know which records from my test data failed. Is there a way I can see those records?
// 1. Load the dataset for training and testing
var trainData = ctx.Data.LoadFromTextFile<SentimentData>(trainDataPath, hasHeader: true);
var testData = ctx.Data.LoadFromTextFile<SentimentData>(testDataPath, hasHeader: true);
// 2. Build a transformer/estimator to transform the input data so that the machine learning algorithm can understand it
IEstimator<ITransformer> estimator = ctx.Transforms.Text.FeaturizeText("Features", nameof(SentimentData.Text));
// 3. Set the training algorithm and create the pipeline for the model builder
var trainer = ctx.BinaryClassification.Trainers.SdcaLogisticRegression();
var trainingPipeline = estimator.Append(trainer);
// 4. Train the model
var trainedModel = trainingPipeline.Fit(trainData);
// 5. Perform the predictions on the test data
var predictions = trainedModel.Transform(testData);
// 6. Evaluate the model
var metrics = ctx.BinaryClassification.Evaluate(data: predictions);
By using the GetColumn and CreateEnumerable methods, you can find the data that the model didn't predict correctly.
After you get the metrics, use the GetColumn method on the predictions from the test data set to get the original label values. Then, use the CreateEnumerable method to get the prediction objects that hold the predicted values. Optionally, you can get the sentiment text as well.
var originalLabels = predictions.GetColumn<bool>("Label").ToArray();
var sentimentText = predictions.GetColumn<string>(nameof(SentimentData.SentimentText)).ToArray();
var predictedLabels = context.Data.CreateEnumerable<SentimentPrediction>(predictions, reuseRowObject: false).ToArray();
After getting the data, just loop through one of them (I used the count of the original labels); you can then access the data at each iteration. From there you can check whether the actual label differs from the predicted value and print out only the cases the model got wrong.
for (int i = 0; i < originalLabels.Count(); i++)
{
string outputText = String.Empty;
if (originalLabels[i] != predictedLabels[i].Prediction)
{
outputText = $"Text - {sentimentText[i]} | ";
outputText += $"Original - {originalLabels[i]} | ";
outputText += $"Predicted - {predictedLabels[i].Prediction}";
Console.WriteLine(outputText);
}
}
With that you have the data that you need. :)
Hope that helps!
From your comment, I believe the method you are looking for can be found in the keras library. The method should be keras.models.predict_classes as found on their documentation page.
This will provide you with an array of predicted outputs, which you can then compare to the ground truths. Visit the documentation to see the parameters.
Hope this helps!
I am using Weka 3.7 to classify text documents based on their content. I have a set of text files in folders and they all belong to a certain category.
Category A: 100 txt files
Category B: 100 txt files
...
Category X: 100 txt files
I want to predict if a document falls into one of the categories A-X, OR if it falls in the category UNRECOGNISED (for all other documents).
I am getting the total set of Instances programmatically like this:
private Instances getTotalSet() {
    ArrayList<Attribute> listOfAttributes = new ArrayList<Attribute>(2);
    Attribute classAttribute = getClassAttribute();
    listOfAttributes.add(classAttribute);
    listOfAttributes.add(new Attribute("text", (ArrayList) null));
    Instances totalSet = new Instances("Rel", listOfAttributes, 2);
    totalSet.setClassIndex(1);
    File[] classNamesFolders = new File(path).listFiles((FileFilter) FileFilterUtils.directoryFileFilter());
    for (File folder : classNamesFolders) {
        if (folder.getName().equals("UNRECOGNISED")) {
            continue;
        }
        System.out.println("Adding " + folder.getName());
        // all txt files in that subfolder
        for (File file : FileUtils.listFiles(folder.getAbsoluteFile(), new SuffixFileFilter(".txt"), DirectoryFileFilter.DIRECTORY)) {
            try {
                Instance instance = new DenseInstance(2);
                instance.setValue(listOfAttributes.get(0), folder.getName());
                instance.setValue(listOfAttributes.get(1), FileUtils.readFileToString(file.getAbsoluteFile()));
                totalSet.add(instance);
            } catch (IOException e) {
                System.out.println("Couldn't add " + e);
            }
        }
    }
    return totalSet;
}
I am using a RandomForest classifier in this case (but that shouldn't make a difference for my question):
RandomForest rf = new RandomForest();
rf.setNumTrees(500);
rf.setMaxDepth(25);
rf.setSeed(1);
System.out.println("Building random forest with " + rf.getNumTrees() + " trees");
rf.buildClassifier(train);
When I make a prediction, I can see in which category the new document should fall, but how can I find out if the document should not belong to any category? While making the prediction I can access the
double pred = rf.classifyInstance(test.instance(i));
double dist[] = rf.distributionForInstance(test.instance(i));
distribution for the instance, but how can I tell whether a document should not be recognised at all and should instead get the category UNRECOGNISED?
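One idea I am considering (not something Weka seems to provide out of the box) is to threshold the best class probability from distributionForInstance and fall back to UNRECOGNISED when the classifier is not confident. The sketch below reuses rf, test, and i from the snippets above; the 0.6 cutoff is an arbitrary value that would need tuning.

// Heuristic sketch: fall back to UNRECOGNISED when the best probability is low.
double[] dist = rf.distributionForInstance(test.instance(i));
int bestClass = 0;
for (int c = 1; c < dist.length; c++) {
    if (dist[c] > dist[bestClass]) {
        bestClass = c;
    }
}
String label = dist[bestClass] >= 0.6            // 0.6 is an arbitrary cutoff to tune
        ? test.classAttribute().value(bestClass)
        : "UNRECOGNISED";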
I am trying to extract topics from 7 million tweets. I treat each tweet as a document, so I stored all tweets in a file where each line (i.e. each tweet) is treated as a document. I used this file as the input file for the Mallet API.
public static void LDAModel(int numofK, int numbofIteration, int numberofThread, String outputDir, InstanceList instances) throws Exception {
    // Create a model with the requested number of topics, alpha_t = 0.01, beta_w = 0.01.
    // Note that the first parameter is passed as the sum over topics, while
    // the second is the parameter for a single dimension of the Dirichlet prior.
    int numTopics = numofK;
    ParallelTopicModel model = new ParallelTopicModel(numTopics, 1.0, 0.01);
    model.addInstances(instances);

    // Use parallel samplers, which each look at a slice of the corpus and combine
    // statistics after every iteration.
    model.setNumThreads(numberofThread);

    // Run the model for the given number of iterations and stop (a small number is
    // for testing only; for real applications, use 1000 to 2000 iterations).
    model.setNumIterations(numbofIteration);
    model.estimate();

    // Show the words and topics in the first instance.
    // The data alphabet maps word IDs to strings.
    Alphabet dataAlphabet = instances.getDataAlphabet();
    FeatureSequence tokens = (FeatureSequence) model.getData().get(0).instance.getData();
    LabelSequence topics = model.getData().get(0).topicSequence;
    Formatter out = new Formatter(new StringBuilder(), Locale.US);
    for (int position = 0; position < tokens.getLength(); position++) {
        out.format("%s-%d ", dataAlphabet.lookupObject(tokens.getIndexAtPosition(position)), topics.getIndexAtPosition(position));
    }
    System.out.println(out);

    // Estimate the topic distribution of the first instance,
    // given the current Gibbs state.
    double[] topicDistribution = model.getTopicProbabilities(0);

    // Get an array of sorted sets of word ID/count pairs.
    ArrayList<TreeSet<IDSorter>> topicSortedWords = model.getSortedWords();

    // Show the top 10 words in each topic with proportions for the first document.
    String topicsoutput = "";
    for (int topic = 0; topic < numTopics; topic++) {
        Iterator<IDSorter> iterator = topicSortedWords.get(topic).iterator();
        out = new Formatter(new StringBuilder(), Locale.US);
        out.format("%d\t%.3f\t", topic, topicDistribution[topic]);
        int rank = 0;
        while (iterator.hasNext() && rank < 10) {
            IDSorter idCountPair = iterator.next();
            out.format("%s (%.0f) ", dataAlphabet.lookupObject(idCountPair.getID()), idCountPair.getWeight());
            rank++;
        }
        topicsoutput += out.toString() + "\n"; // collect the line so it can also be written to the output file below
        System.out.println(out);
    }

    // Create a new instance with high probability of topic 0.
    StringBuilder topicZeroText = new StringBuilder();
    Iterator<IDSorter> iterator = topicSortedWords.get(0).iterator();
    int rank = 0;
    while (iterator.hasNext() && rank < 10) {
        IDSorter idCountPair = iterator.next();
        topicZeroText.append(dataAlphabet.lookupObject(idCountPair.getID()) + " ");
        rank++;
    }

    // Create a new instance named "test instance" with empty target and source fields.
    InstanceList testing = new InstanceList(instances.getPipe());
    testing.addThruPipe(new Instance(topicZeroText.toString(), null, "test instance", null));
    TopicInferencer inferencer = model.getInferencer();
    double[] testProbabilities = inferencer.getSampledDistribution(testing.get(0), 10, 1, 5);
    System.out.println("0\t" + testProbabilities[0]);

    File pathDir = new File(outputDir + File.separator + "NumofTopics" + numTopics); // FIXME replace all strings with constants
    pathDir.mkdir();
    String DirPath = pathDir.getPath();
    String stateFile = DirPath + File.separator + "output_state.gz";
    String outputDocTopicsFile = DirPath + File.separator + "output_doc_topics.txt";
    String topicKeysFile = DirPath + File.separator + "output_topic_keys";
    PrintWriter writer = null;
    String topicKeysFile_fromProgram = DirPath + File.separator + "output_topic";
    try {
        writer = new PrintWriter(topicKeysFile_fromProgram, "UTF-8");
        writer.print(topicsoutput);
        writer.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
    model.printTopWords(new File(topicKeysFile), 11, false);
    model.printDocumentTopics(new File(outputDocTopicsFile));
    model.printState(new File(stateFile));
}
public static void main(String[] args) throws Exception {
    // Begin by importing documents from text to feature sequences.
    ArrayList<Pipe> pipeList = new ArrayList<Pipe>();

    // Pipes: lowercase, tokenize, remove stopwords, map to features.
    pipeList.add(new CharSequenceLowercase());
    pipeList.add(new CharSequence2TokenSequence(Pattern.compile("\\p{L}[\\p{L}\\p{P}]+\\p{L}")));
    pipeList.add(new TokenSequenceRemoveStopwords(new File("H:\\Data\\stoplists\\en.txt"), "UTF-8", false, false, false));
    pipeList.add(new TokenSequence2FeatureSequence());
    InstanceList instances = new InstanceList(new SerialPipes(pipeList));

    Reader fileReader = new InputStreamReader(new FileInputStream(new File("E:\\Thesis Data\\DataForLDA\\freshnewData\\cleanTweets.txt")), "UTF-8");
    instances.addThruPipe(new CsvIterator(fileReader, Pattern.compile("^(\\S*)[\\s,]*(\\S*)[\\s,]*(.*)$"),
            3, 2, 1)); // data, label, name fields

    int numberofTopic = 5;
    int numberofIteration = 50;
    int numberofThread = 6;
    String outputDir = "J:\\Topics\\";
    // int numberofTopic = 5;
    LDAModel(numberofTopic, numberofIteration, numberofThread, outputDir, instances);
    TimeUnit.SECONDS.sleep(30);
    numberofTopic = 10;
}
I have got three files from the above program.
1. state file
2. topic proportion file
3. key topic list
I would like to find out the number of documents allocated per topic.
For example, I got the following output from the key topic list file:
0.004 obama (5471) canada (5283) woman (5152) vote (4879) police(3965)
where the first column is the topic serial number, the second column is the topic weight, and the remaining columns are the words under this topic (with their counts).
Here I got the number of words under each topic, but I would also like to show the number of documents in which each topic appears. It would be helpful to show this output in a separate file, for example:
Topic 1: doc1(80%) doc2(70%) .......
Could anyone please give some idea or any source code for this?
Thanks.
The information you are looking for is contained in the file "2. topic proportion" you mentioned. Note that every document contains each topic with some percentage (although the percentage may be large for one topic and extremely small for the others). You will have to decide what you want to extract from the file: the dominant topic (it is in column 3), or the dominant topic only when its percentage is at least 50% (sometimes two topics have almost the same percentage), and so on.
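As a rough illustration, the sketch below scans the doc-topics file and collects, per topic, the documents whose dominant-topic proportion passes a cutoff. It assumes the older MALLET doc-topics layout (document index, document name, then alternating topic id / proportion pairs sorted by decreasing proportion, which is what "column 3" above refers to); newer MALLET versions write one proportion column per topic, so the parsing would need adjusting. The file name and cutoff are placeholders.

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DocsPerTopicSketch {
    public static void main(String[] args) throws Exception {
        double cutoff = 0.5; // arbitrary: count a document only if the dominant topic holds >= 50%
        Map<Integer, List<String>> docsPerTopic = new HashMap<Integer, List<String>>();
        BufferedReader reader = new BufferedReader(new FileReader("output_doc_topics.txt"));
        String line;
        while ((line = reader.readLine()) != null) {
            if (line.startsWith("#")) continue;            // skip the header line
            String[] cols = line.trim().split("\\s+");
            String docName = cols[1];                      // column 1 = doc index, column 2 = doc name
            int dominantTopic = Integer.parseInt(cols[2]); // first topic id (highest proportion)
            double proportion = Double.parseDouble(cols[3]);
            if (proportion >= cutoff) {
                docsPerTopic.computeIfAbsent(dominantTopic, k -> new ArrayList<>())
                        .add(docName + "(" + Math.round(proportion * 100) + "%)");
            }
        }
        reader.close();
        for (Map.Entry<Integer, List<String>> e : docsPerTopic.entrySet()) {
            System.out.println("Topic " + e.getKey() + ": " + e.getValue());
        }
    }
}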
Finally, I am able to train the Mahout classifier; now my problem is how to get the target category for my input document.
What is the process for getting the target category for my text documents?
First, you have to vectorize the text document into a RandomAccessSparseVector.
Some sample code for your reference:
Vector vector = new RandomAccessSparseVector(FEATURES);
FeatureExtractor fe = new FeatureExtractor();
HashSet<String> fs = fe.extract(text);
for (String s : fs) {
    int index = dictionary.get(s);
    vector.setQuick(index, frequency.get(index));
}
Then, use the Classifier.classify(Vector) to get the result.
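As a hedged sketch of that last step: classifyFull (a variant of classify that returns a score for every category) gives you a score vector, and the index of the largest score selects the label. Here classifier, vector, and categoryLabels are placeholders for your trained AbstractVectorClassifier, the vector built above, and the label dictionary from training.

// Sketch: pick the category with the highest score.
Vector scores = classifier.classifyFull(vector); // one score per category
int bestIndex = scores.maxValueIndex();
String targetCategory = categoryLabels.get(bestIndex);
System.out.println("predicted category: " + targetCategory + " (score " + scores.get(bestIndex) + ")");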
I would like to know if there is a way in WEKA to output a number of 'best-guesses' for a classification.
My scenario is: I classify the data with cross-validation, for instance, and then in Weka's output I get something like: these are the 3 best guesses for the classification of this instance. What I want is, even if an instance isn't correctly classified, an output of the 3 or 5 best guesses for that instance.
Example:
Classes: A,B,C,D,E
Instances: 1...10
And output would be:
instance 1 is 90% likely to be class A, 75% likely to be class B, 60% likely to be class C...
Thanks.
Weka's API has a method called Classifier.distributionForInstance() that can be used to get the classification prediction distribution. You can then sort the distribution by decreasing probability to get your top-N predictions.
Below is a function that prints out: (1) the test instance's ground truth label; (2) the predicted label from classifyInstance(); and (3) the prediction distribution from distributionForInstance(). I have used this with J48, but it should work with other classifiers.
The input parameters are the serialized model file (which you can create during the model training phase by applying the -d option) and the test file in ARFF format.
public void test(String modelFileSerialized, String testFileARFF)
        throws Exception {
    // Deserialize the classifier.
    Classifier classifier =
            (Classifier) weka.core.SerializationHelper.read(modelFileSerialized);

    // Load the test instances.
    Instances testInstances = DataSource.read(testFileARFF);

    // Mark the last attribute in each instance as the true class.
    testInstances.setClassIndex(testInstances.numAttributes() - 1);
    int numTestInstances = testInstances.numInstances();
    System.out.printf("There are %d test instances\n", numTestInstances);

    // Loop over each test instance.
    for (int i = 0; i < numTestInstances; i++) {
        // Get the true class label from the instance's own classIndex.
        String trueClassLabel =
                testInstances.instance(i).toString(testInstances.classIndex());

        // Make the prediction here.
        double predictionIndex =
                classifier.classifyInstance(testInstances.instance(i));

        // Get the predicted class label from the predictionIndex.
        String predictedClassLabel =
                testInstances.classAttribute().value((int) predictionIndex);

        // Get the prediction probability distribution.
        double[] predictionDistribution =
                classifier.distributionForInstance(testInstances.instance(i));

        // Print out the true label, predicted label, and the distribution.
        System.out.printf("%5d: true=%-10s, predicted=%-10s, distribution=",
                i, trueClassLabel, predictedClassLabel);

        // Loop over all the prediction labels in the distribution.
        for (int predictionDistributionIndex = 0;
                predictionDistributionIndex < predictionDistribution.length;
                predictionDistributionIndex++) {
            // Get this distribution index's class label.
            String predictionDistributionIndexAsClassLabel =
                    testInstances.classAttribute().value(predictionDistributionIndex);

            // Get the probability.
            double predictionProbability =
                    predictionDistribution[predictionDistributionIndex];

            System.out.printf("[%10s : %6.3f]",
                    predictionDistributionIndexAsClassLabel,
                    predictionProbability);
        }
        System.out.printf("\n");
    }
}
I don't know if you can do it natively, but you can just get the probabilities for each class, sort them, and take the first three.
The function you want is distributionForInstance(Instance instance) which returns a double[] giving the probability for each class.
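A small sketch of that idea, where classifier and inst stand in for your trained model and the test instance: pair each class index with its probability, sort by decreasing probability, and print the top three.

// Sketch: rank the class probabilities and print the best three guesses.
// "classifier" and "inst" are placeholders for your trained model and test instance.
double[] dist = classifier.distributionForInstance(inst);
Integer[] order = new Integer[dist.length];
for (int c = 0; c < dist.length; c++) {
    order[c] = c;
}
java.util.Arrays.sort(order, (a, b) -> Double.compare(dist[b], dist[a]));
for (int r = 0; r < Math.min(3, order.length); r++) {
    System.out.printf("%s : %.3f%n", inst.classAttribute().value(order[r]), dist[order[r]]);
}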
Not in general. The information you want is not available with all classifiers -- in most cases (for example for decision trees), the decision is clear (albeit potentially incorrect) without a confidence value. Your task requires classifiers that can handle uncertainty (such as the naive Bayes classifier).
Technically the easiest thing to do is probably to train the model and then classify an individual instance, for which Weka should give you the desired output. In general you can of course also do it for sets of instances, but I don't think that Weka provides this out of the box. You would probably have to customise the code or use it through an API (for example in R).
When you calculate a probability for the instance, how exactly do you do this?
I have posted my PART rules and data for the new instance here, but as for calculating it manually I am not so sure how to do this! Thanks
EDIT: now calculated:
private float[] getProbDist(String split) {
    // Takes in something such as "(52/2)", meaning 52 instances correctly
    // classified and 2 incorrectly classified.
    // prob_dis: the two counts parsed from the rule string, e.g. "(52/2)" -> {"52", "2"}.
    String[] prob_dis = split.replace("(", "").replace(")", "").split("/");
    if (prob_dis.length > 2)
        return null;
    if (prob_dis.length == 1) {
        String temp = prob_dis[0];
        prob_dis = new String[2];
        prob_dis[0] = "1";
        prob_dis[1] = temp;
    }
    float p1 = new Float(prob_dis[0]);
    float p2 = new Float(prob_dis[1]);
    // Assumes two tags.
    float[] tag_prob = new float[2];
    tag_prob[0] = p2 / p1;          // fraction of covered instances that were misclassified
    tag_prob[1] = 1 - tag_prob[0];  // complement of the misclassification fraction
    // Returns the two probabilities as a float[].
    return tag_prob;
}