Improving Article Classifier Accuracy - machine-learning

I've built an article classifier based on Wikipedia data that I fetch, which falls into 5 classes in total.
They are:
Finance (15 articles) [1,0,0,0,0]
Sports (15 articles) [0,1,0,0,0]
Politics (15 articles) [0,0,1,0,0]
Science (15 articles) [0,0,0,1,0]
None (15 random articles not pertaining to the others) [0,0,0,0,1]
I went to Wikipedia and grabbed about 15 fairly lengthy articles from each of these categories to build the corpus I use to train my network.
After building a lexicon of about 1000 words gathered from all of the articles, I converted each article to a word vector, paired with the correct class label.
The word vector is a multi-hot array, while the label is a one-hot array.
For example, here is the representation of one article:
[
[0,0,0,1,0,0,0,1,0,0, ... 1000 entries], [1,0,0,0,0] # this maps to Finance
]
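For illustration, here is a minimal sketch of the kind of lexicon / multi-hot encoding I mean (simplified placeholders, not my actual code):

from collections import Counter
import re

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def build_lexicon(articles, size=1000):
    # keep the `size` most frequent tokens across the whole corpus
    counts = Counter(tok for art in articles for tok in tokenize(art))
    return [tok for tok, _ in counts.most_common(size)]

def to_word_vector(article, lexicon):
    tokens = set(tokenize(article))
    # multi-hot: 1 if the lexicon word appears anywhere in the article
    return [1 if word in tokens else 0 for word in lexicon]

finance_label = [1, 0, 0, 0, 0]   # one-hot label over the 5 classes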
So, in essence, I have this randomized list of word vectors mapped to their correct classifiers.
My network is a 3-layer deep neural net with 500 nodes in each layer. I train it for 30 epochs and then just display how accurate the model is at the end.
Right now, I'm getting about 53% to 55% accuracy. My question is, what can I do to get this up into the 90s? Is it even possible, or am I going to go crazy trying to train this thing?
Additionally, what is my main bottleneck, so to speak?
edited per comments below
Neural networks aren't really designed to run best on single machines; they work much better if you have a cluster, or at least a production-grade machine. It's very common to eliminate the "long tail" of a corpus: if a term only appears in one document one time, then you may want to eliminate it. You may also want to apply some stemming so that you don't capture multiples of the same word. I strongly advise you to try applying a TF-IDF transformation to your corpus before pruning.
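A rough sketch of that preprocessing (assuming scikit-learn and NLTK are available; raw_articles is a placeholder list of article strings):

import re
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer

stemmer = PorterStemmer()

def stem_tokenize(text):
    # stemming collapses "price"/"prices"/"pricing" into a single token
    return [stemmer.stem(tok) for tok in re.findall(r"[a-z']+", text.lower())]

# min_df=2 prunes the "long tail": terms that appear in only one document
vectorizer = TfidfVectorizer(tokenizer=stem_tokenize, min_df=2)
X_tfidf = vectorizer.fit_transform(raw_articles)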
Network size optimization is a field unto itself. Basically, you try adding more/less nodes and see where that gets you. See the following for a technical discussion.
https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw

It is impossible to know without seeing the data.
Things to try:
Transform your word vector to TF-IDF. Are you removing stop words? You can also add bi-grams/tri-grams to your word vector (see the sketch after this list).
Add more articles - it could be difficult to separate the classes in such a small corpus. The length of a specific document doesn't necessarily help; you want more articles.
30 epochs feels very low to me.
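Putting those suggestions together, a hedged baseline sketch (scikit-learn assumed; raw_articles and labels are placeholders, and a plain linear model stands in here just to sanity-check the features, not to replace the asker's network):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# TF-IDF over unigrams, bigrams and trigrams, with English stop words removed
model = make_pipeline(
    TfidfVectorizer(stop_words="english", ngram_range=(1, 3), min_df=2),
    LogisticRegression(max_iter=1000),
)
scores = cross_val_score(model, raw_articles, labels, cv=5)   # labels: one class per article
print("cross-validated accuracy:", scores.mean())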

Related

Number of backprops as performance metric for neural networks

I have been reading an article about SRCNN and found that they are using "number of backprops" for evaluating how well the network is performing, i.e. what the network is able to learn after x backprops (as I understand it). I would like to know what "number of backprops" actually means. Is this just the number of training data samples that were used during the training? Or maybe the number of mini-batches? Maybe it is one of the previous numbers multiplied by the number of learnable parameters in the network? Or something completely different? Maybe there is some other, more common name for this that I could look up somewhere and read more about, because I was not able to find anything useful by searching "number of backprops" or "number of backpropagations"?
Bonus question: how widely is this metric used, and how good is it?
I read their paper from 2016:
C. Dong, C. C. Loy, K. He, and X. Tang, "Image Super-Resolution Using Deep Convolutional Networks," IEEE Transactions on Pattern Analysis and Machine Intelligence.
Since they don't even mention batches, I assume they are doing a backpropagation to update their weights after each sample / image.
In other words, their batch size (mini-batch size) is equal to 1 sample.
So "number of backpropagations" effectively means the number of batches processed, which is quite a common metric: in the paper it is PSNR (loss) plotted over the number of batches (more usually you see loss over epochs).
Bonus question: I come to the conclusion that they just didn't stick to the common terminology of machine learning, or deep learning.
Bonus bonus question: They use the metric of loss after n batches to showcase how much the different network architectures could learn on training datasets of different sizes.
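As a quick illustration of the bookkeeping (the numbers below are invented, not taken from the paper):

import math

num_samples = 2000   # hypothetical training-set size
batch_size = 1       # as assumed above: one weight update per sample
epochs = 10

updates_per_epoch = math.ceil(num_samples / batch_size)
total_backprops = epochs * updates_per_epoch   # 20000 weight updates / "backprops"
print(total_backprops)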
I would assume it means how much the network has learned after back-propagating n times. It's most likely interchangeable with "after training over n samples...".
This may be a bit different if they are using a recurrent network, as they could have more samples run in forward prop than in backward prop. (For whatever reason I can't get the link to the paper to load, so I'm unsure.)
Based on your number of questions I think you might be overthinking this :)
Number of backprops is not a commonly used metric. Perhaps they use it here to showcase the speed of training based upon whatever optimization methods they are using. But for most common instances, it is not a relevant metric.

modeling feature set with text documents

Example:
I have m sets of ~1000 text documents, ~10 are predictive of a binary result, roughly 990 aren't.
I want to train a classifier to take a set of documents and predict the binary result.
Assume for discussion that the documents each map the text to 100 features.
How is this modeled in terms of training examples and features? Do I merge all the text together and map it to a fixed set of features? Do I have 100 features per document * ~1000 documents (100,000 features) and one training example per set of documents? Do I classify each document separately and analyze the resulting set of confidences as they relate to the final binary prediction?
The most common way to handle text documents is with a bag-of-words model. The class proportions are irrelevant. Each word gets mapped to a unique index. Make the value at that index equal to the number of times that token occurs (there are smarter things to do). The number of features/dimensions is then the number of unique tokens/words in your corpus. There are many issues with this, and some of them are discussed here. But it works well enough for many things.
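A minimal bag-of-words sketch (scikit-learn assumed; the three toy documents are just for illustration):

from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat", "the dog sat", "the cat and the dog"]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)           # shape: (3 documents, n unique tokens)
print(vectorizer.get_feature_names_out())    # each token is mapped to a fixed column index
print(X.toarray())                           # value = number of times the token occurs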
I would want to approach it as a two stage problem.
Stage 1: predict the relevancy of a document from the set of 1000. For best combination with stage 2, use something probabilistic (logistic regression is a good start).
Stage 2: Define features on the output of stage 1 to determine the answer to the ultimate question. These could be things like the counts of words for the n most relevant docs from stage 1, the probability of the most probable document, the 99th percentile of those probabilities, variances in probabilities, etc. Whatever you think will get you the correct answer (experiment!)
The reason for this is as follows: concatenating documents together will drown you in irrelevant information. You'll spend ages trying to figure out which words/features allow actual separation between the classes.
On the other hand, if you concatenate feature vectors together, you'll run into an exchangeability problem. By that I mean, word 1 in document 1 will be in position 1, word 1 in document 2 will be in position 1001, in document 3 it will be in position 2001, etc., and there will be no way to know that the features are all related. Furthermore, an alternate ordering of the documents would change the positions in the feature vector, and your learning algorithm won't be aware of this. Equally valid presentations of the document orders will lead to completely different results in an entirely non-deterministic and unsatisfying way (unless you spend a long time designing a custom classifier that's not afflicted with this problem, which might ultimately be necessary, but it's not the thing I'd start with).
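A hedged sketch of that two-stage idea (scikit-learn assumed; doc_features, doc_relevance, document_sets and set_labels are hypothetical names for data you would build yourself):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Stage 1: per-document relevance model (one row of features per document)
stage1 = LogisticRegression(max_iter=1000)
stage1.fit(doc_features, doc_relevance)   # doc_relevance: 1 = relevant, 0 = not

def set_features(doc_feature_matrix):
    # probability that each document in the set is relevant
    p = stage1.predict_proba(doc_feature_matrix)[:, 1]
    # aggregate into a fixed-length description of the whole set
    return [p.max(), np.percentile(p, 99), p.mean(), p.var()]

# Stage 2: one training example per *set* of documents
X_sets = np.array([set_features(s) for s in document_sets])
stage2 = LogisticRegression().fit(X_sets, set_labels)   # set_labels: the binary result per set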

Machine learning: Which algorithm is used to identify relevant features in a training set?

I've got a problem where I've potentially got a huge number of features. Essentially a mountain of data points (for discussion let's say it's in the millions of features). I don't know what data points are useful and what are irrelevant to a given outcome (I guess 1% are relevant and 99% are irrelevant).
I do have the data points and the final outcome (a binary result). I'm interested in reducing the feature set so that I can identify the most useful set of data points to collect to train future classification algorithms.
My current data set is huge, and I can't generate as many training examples with the mountain of data as I could if I were to identify the relevant features, cut down how many data points I collect, and increase the number of training examples. I expect that I would get better classifiers with more training examples given fewer feature data points (while maintaining the relevant ones).
What machine learning algorithms should I focus on to, first,
identify the features that are relevant to the outcome?
From some reading I've done it seems like SVM provides weighting per feature that I can use to identify the most highly scored features. Can anyone confirm this? Expand on the explanation? Or should I be thinking along another line?
Feature weights in a linear model (logistic regression, naive Bayes, etc) can be thought of as measures of importance, provided your features are all on the same scale.
Your model can be combined with a regularizer for learning that penalises certain kinds of feature vectors (essentially folding feature selection into the classification problem). L1 regularized logistic regression sounds like it would be perfect for what you want.
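A sketch of that approach (scikit-learn assumed; X and y are placeholders for your feature matrix and binary outcome):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X_scaled = StandardScaler().fit_transform(X)   # same scale, so weights are comparable
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)   # smaller C = sparser model
clf.fit(X_scaled, y)

selected = np.flatnonzero(clf.coef_[0])        # features that kept a non-zero weight
ranking = np.argsort(-np.abs(clf.coef_[0]))    # all features, largest |weight| first
print(len(selected), ranking[:20])

With millions of features you would keep the data sparse and skip the dense scaling step, but the idea is the same.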
Maybe you can use PCA or a maximum entropy algorithm in order to reduce the data set...
You can go for Chi-square tests or entropy depending on your data type. Supervised discretization highly reduces the size of your data in a smart way (take a look into the Recursive Minimal Entropy Partitioning algorithm proposed by Fayyad & Irani).
If you work in R, the SIS package has a function that will do this for you.
If you want to do things the hard way, what you want to do is feature screening: a massive preliminary dimension reduction before you do feature selection and model selection from a sane-sized set of features. Figuring out what the sane size is can be tricky, and I don't have a magic answer for that, but you can prioritize the order in which you'd want to include the features by:
1) for each feature, split the data in two groups by the binary response
2) find the Kolmogorov-Smirnov statistic comparing the two sets
The features with the highest KS statistic are most useful in modeling.
There's a paper "out there" titled "A selective overview of feature screening for ultrahigh-dimensional data" by Liu, Zhong, and Li; I'm sure a free copy is floating around the web somewhere.
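A sketch of that screening step (SciPy assumed; X is an n_samples x n_features array and y the binary response):

import numpy as np
from scipy.stats import ks_2samp

def ks_screen(X, y, keep=1000):
    # KS statistic per feature, comparing its distribution across the two classes
    scores = np.array([ks_2samp(X[y == 0, j], X[y == 1, j]).statistic
                       for j in range(X.shape[1])])
    # indices of the `keep` features with the largest KS statistic
    return np.argsort(-scores)[:keep]

top_features = ks_screen(X, y)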
4 years later I'm now halfway through a PhD in this field and I want to add that the definition of a feature is not always simple. In the case that your features are a single column in your dataset, the answers here apply quite well.
However, take the case of an image being processed by a convolutional neural network: a feature is not one pixel of the input; rather, it's much more conceptual than that. Here's a nice discussion for the case of images:
https://medium.com/@ageitgey/machine-learning-is-fun-part-3-deep-learning-and-convolutional-neural-networks-f40359318721

What does dimensionality reduction mean?

What does dimensionality reduction mean exactly?
I searched for its meaning and just found that it means the transformation of raw data into a more useful form. So what is the benefit of having data in a useful form? I mean, how can I use it in practical applications?
Dimensionality reduction is about converting data of very high dimensionality into data of much lower dimensionality such that each of the lower dimensions conveys much more information.
This is typically done while solving machine learning problems to get better features for a classification or regression task.
Here's a contrived example: suppose you have a list of 100 movies and 1000 people, and for each person you know whether they like or dislike each of the 100 movies. So for each instance (which in this case means each person) you have a binary vector of length 100 [position i is 0 if that person dislikes the i'th movie, 1 otherwise].
You could perform your machine learning task on these vectors directly, but instead you could decide upon 5 genres of movies and, using the data you already have, figure out whether the person likes or dislikes each entire genre, and in this way reduce your data from a vector of size 100 to a vector of size 5 [position i is 1 if the person likes genre i].
The vector of length 5 can be thought of as a good representative of the vector of length 100 because most people might be liking movies only in their preferred genres.
However, it's not going to be an exact representative, because there might be cases where a person hates all movies of a genre except one.
The point is, that the reduced vector conveys most of the information in the larger one while consuming a lot less space and being faster to compute with.
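A toy sketch of that 100-to-5 reduction (the genre assignments and the "likes a majority of the genre" rule are invented purely for illustration):

import numpy as np

rng = np.random.default_rng(0)
likes = rng.integers(0, 2, size=(1000, 100))     # 1000 people x 100 movies, 1 = likes it
genre_of_movie = rng.integers(0, 5, size=100)    # pretend each movie belongs to one of 5 genres

# 5-dim representation: 1 if the person likes a majority of that genre's movies
reduced = np.zeros((1000, 5), dtype=int)
for g in range(5):
    genre_cols = likes[:, genre_of_movie == g]
    reduced[:, g] = (genre_cols.mean(axis=1) > 0.5).astype(int)

print(likes.shape, "->", reduced.shape)          # (1000, 100) -> (1000, 5)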
Your question is a little vague, but there's an interesting statistical technique that may be what you're thinking of, called Principal Component Analysis (PCA), which does something similar (and incidentally, plotting the results from it was my first real-world programming task).
It's a neat but clever technique which is remarkably widely applicable. I applied it to similarities between protein amino acid sequences, but I've seen it used for analysing everything from relationships between bacteria to malt whisky.
Consider a graph of some attributes of a collection of things where one has two independent variables - to analyse the relationship between these one obviously plots on two dimensions and you might see a scatter of points. If you've got three variables you can use a 3D graph, but after that one starts to run out of dimensions.
In PCA one might have dozens or even a hundred or more independent factors, all of which need to be plotted on perpendicular axes. Using PCA one does this, then analyses the resultant multidimensional graph to find the set of two or three axes within the graph which contain the largest amount of information. For example, the first Principal Coordinate will be a composite axis (i.e. at some angle through n-dimensional space) which carries the most information when the points are plotted along it. The second axis is perpendicular to this (remember this is n-dimensional space, so there are a lot of perpendiculars) and contains the second largest amount of information, etc.
Plotting the resultant graph in 2D or 3D will typically give you a visualization of the data which contains a significant amount of the information in the original dataset. It's usual, for the technique to be considered valid, to look for a representation that contains around 70% of the variation in the original data - enough to visualize relationships with some confidence that would otherwise not be apparent in the raw statistics. Note that the technique requires that all factors have the same weight, but given that, it's an extremely widely applicable method that deserves to be more widely known and is available in most statistical packages (I did my work on an ICL 2700 in 1980 - which is about as powerful as an iPhone).
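A short PCA sketch (scikit-learn assumed; the data here is random and only meant to show the mechanics):

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 50))                # 200 observations, 50 variables
data[:, 1] = data[:, 0] + 0.1 * data[:, 1]       # make two of them strongly correlated

pca = PCA(n_components=3)
coords = pca.fit_transform(data)                 # coordinates on the first 3 principal axes
print(pca.explained_variance_ratio_.sum())       # fraction of the variance captured (cf. the ~70% rule of thumb)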
http://en.wikipedia.org/wiki/Dimension_reduction
Maybe you have heard of PCA (principal component analysis), which is a dimension reduction algorithm.
Others include LDA, matrix factorization based methods, etc.
Here's a simple example. You have a lot of text files and each file consists of some words. These files can be classified into two categories. You want to visualize a file as a point in 2D/3D space so that you can see the distribution clearly. So you need to do dimension reduction to transform a file containing a lot of words into only 2 or 3 dimensions.
The dimensionality of a measurement of something is the number of numbers required to describe it. So, for example, the number of numbers needed to describe the location of a point in space will be 3 (x, y and z).
Now let's consider the location of a train along a long but winding track through the mountains. At first glance this may appear to be a 3-dimensional problem, requiring a longitude, latitude and height measurement to specify. But these 3 dimensions can be reduced to one if you just take the distance travelled along the track from the start instead.
If you were given the task of using a neural network or some statistical technique to predict how far a train could get given a certain quantity of fuel, then it will be far easier to work with the 1 dimensional data than the 3 dimensional version.
It's a technique of data mining. Its main benefit is that it allows you to produce a visual representation of many-dimensional data. The human brain is peerless at spotting and analyzing patterns in visual data, but can process a maximum of three dimensions (four if you use time, i.e. animated displays) - so any data with more than 3 dimensions needs to be somehow compressed down to 3 (or 2, since plotting data in 3D can often be technically difficult).
BTW, a very simple form of dimensionality reduction is the use of color to represent an additional dimension, for example in heat maps.
Suppose you're building a database of information about a large collection of adult human beings. It's also going to be quite detailed. So we could say that the database is going to have large dimensions.
As a matter of fact, each database record will actually include a measure of the person's IQ and shoe size. Now let's pretend that these two characteristics are quite highly correlated. Compared to IQs, shoe sizes may be easy to measure, and we want to populate the database with useful data as quickly as possible. One thing we could do would be to forge ahead and record shoe sizes for new database records, postponing the task of collecting IQ data for later. We would still be able to estimate IQs using shoe sizes, because the two measures are correlated.
We would be using a very simple form of practical dimension reduction by leaving IQ out of records initially. Principal components analysis, various forms of factor analysis and other methods are extensions of this simple idea.
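A toy sketch of the shoe-size/IQ idea (all of the data here is invented; the point is only that an easy-to-measure, correlated column can stand in for a hard-to-measure one):

import numpy as np

rng = np.random.default_rng(2)
shoe = rng.normal(42, 3, size=500)                       # easy to measure
iq = 100 + 2.0 * (shoe - 42) + rng.normal(0, 5, 500)     # pretend IQ correlates with shoe size

# fit IQ ~ shoe size on the records where we measured both ...
slope, intercept = np.polyfit(shoe[:400], iq[:400], 1)
# ... and estimate IQ for new records where we only have shoe size
iq_estimate = slope * shoe[400:] + intercept
print(np.corrcoef(iq_estimate, iq[400:])[0, 1])          # how well the proxy tracks the real value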

Neural networks for email spam detection

Let's say you have access to an email account with the history of emails received over the last few years (~10k emails), classified into 2 groups:
genuine email
spam
How would you approach the task of creating a neural network solution that could be used for spam detection - basically classifying any email either as spam or not spam?
Let's assume that the email fetching is already in place and we need to focus on classification part only.
The main points which I would hope to get answered would be:
Which parameters to choose as the input for the NN, and why?
What structure of the NN would most likely work best for such a task?
Also, any resource recommendations or existing implementations (preferably in C#) are more than welcome.
Thank you
EDIT
I am set on using neural networks, as the main aspect of the project is to test how the NN approach would work for spam detection.
Also, it is a "toy problem" simply to explore the subject of neural networks and spam.
If you insist on NNs... I would calculate some features for every email
Character-based, word-based, and vocabulary features (about 97 as I count them; a small sketch of computing a few of these appears at the end of this answer):
Total no. of characters (C)
Total no. of alpha chars / C (ratio of alpha chars)
Total no. of digit chars / C
Total no. of whitespace chars / C
Frequency of each letter / C (36 letters of the keyboard: A-Z, 0-9)
Frequency of special chars (10 chars: *, _, +, =, %, $, #, ـ, \, /)
Total no. of words (M)
Total no. of short words / M (two letters or less)
Total no. of chars in words / C
Average word length
Avg. sentence length in chars
Avg. sentence length in words
Word length freq. distribution / M (ratio of words of length n, n between 1 and 15)
Type-token ratio (no. of unique words / M)
Hapax legomena (freq. of once-occurring words)
Hapax dislegomena (freq. of twice-occurring words)
Yule's K measure
Simpson's D measure
Sichel's S measure
Brunet's W measure
Honore's R measure
Frequency of punctuation (18 punctuation chars: . ، ; ? ! : ( ) – “ « » < > [ ] { })
You could also add some more features based on the formatting used: colors, fonts, sizes, and so on.
Most of these measures can be found online, in papers, or even Wikipedia (they're all simple calculations, probably based on the other features).
So with about 100 features, you need 100 inputs, some number of nodes in a hidden layer, and one output node.
The inputs would need to be normalized according to your current pre-classified corpus.
I'd split it into two groups, use one as a training group, and the other as a testing group, never mixing them. Maybe at a 50/50 ratio of train/test groups with similar spam/nonspam ratios.
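A sketch of a handful of the character/word-based features above (Python for brevity, even though the question mentions C#; which features you compute, and exactly how, is up to you):

import re
from collections import Counter

def email_features(text):
    c = max(len(text), 1)                          # total number of characters (C)
    words = re.findall(r"[A-Za-z']+", text)
    m = max(len(words), 1)                         # total number of words (M)
    word_freq = Counter(w.lower() for w in words)
    return {
        "alpha_ratio": sum(ch.isalpha() for ch in text) / c,
        "digit_ratio": sum(ch.isdigit() for ch in text) / c,
        "space_ratio": sum(ch.isspace() for ch in text) / c,
        "short_word_ratio": sum(1 for w in words if len(w) <= 2) / m,
        "avg_word_length": sum(len(w) for w in words) / m,
        "type_token_ratio": len(word_freq) / m,
        "hapax_ratio": sum(1 for f in word_freq.values() if f == 1) / m,
    }

print(email_features("Buy V1AGRA now!!! Limited offer, click here."))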
Are you set on doing it with a Neural Network? It sounds like you're set up pretty well to use Bayesian classification, which is outlined well in a couple of essays by Paul Graham:
A Plan for Spam
Better Bayesian Filtering
The classified history you have access to would make very strong corpora to feed to a Bayesian algorithm, you'd probably end up with quite an effective result.
You'll basically have an entire problem of feature extraction, of similar scope to designing and training the neural net itself. Where I would start, if I were you, is in slicing and dicing the input text in a large number of ways, each one being a potential feature input along the lines of "this neuron signals 1.0 if 'price' and 'viagra' occur within 3 words of each other", and culling those according to best absolute correlation with spam identification.
I'd start by taking my best 50 to 200 input feature neurons and hooking them up to a single output neuron (values trained for 1.0 = spam, -1.0 = not spam), i.e. a single-layer perceptron. I might try a multi-layer backpropagation net if that worked poorly, but wouldn't be holding my breath for great results.
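A minimal single-layer perceptron sketch along those lines (scikit-learn assumed; the toy features here are just binary token indicators rather than the co-occurrence features described above):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Perceptron
from sklearn.pipeline import make_pipeline

emails = ["cheap viagra best price", "meeting moved to friday", "win a free prize now"]
labels = [1, 0, 1]   # 1 = spam, 0 = not spam (toy data)

model = make_pipeline(CountVectorizer(binary=True), Perceptron())
model.fit(emails, labels)
print(model.predict(["free viagra price"]))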
Generally, my experience has led me to believe that neural networks will show mediocre performance at best in this task, and I'd definitely recommend something Bayesian as Chad Birch suggests, if this is something other than a toy problem for exploring neural nets.
Chad, the answers you've gotten so far are reasonable, but I'll respond to your update that:
I am set on using neural networks as the main aspect on the project is to test how the NN approach would work for spam detection.
Well, then you have a problem: an empirical test like this can't prove unsuitability.
You're probably best off learning a bit about what NNs actually do and don't do, to see why they are not a particularly good idea for this sort of classification problem. Probably a helpful way to think about them is as universal function approximators. But for some idea of how this all fits together in the area of classification (which is what the spam filtering problem is), browsing an intro text like Pattern Classification might be helpful.
Failing that, if you are dead set on seeing it run, just use any general NN library for the network itself. Most of your issue is going to be how to represent the input data anyway. The "best" structure is non-obvious, and it probably doesn't matter that much. The inputs are going to have to be a number of (normalized) measurements (features) on the corpus itself. Some are obvious (counts of 'spam' words, etc.), some much less so. This is the part you can really play around with, but you should expect to do poorly compared to Bayesian filters (which have their own problems here) due to the nature of the problem.
