Classifying to three classes using a single output - machine-learning

I'm using LSTM for sentiment classification and I have three optional classes - negative/positive/neutral.
I wonder if there's a way to do this classification using a single output that will be in the range of -1:1 while -1 is the neutral class, 0 is the negative one and 1 is the positive class.
I know the sigmoid function goes from 0 to 1 and tanh from -1 to 1, so working with tanh is probably a good lead. But still, does it make any sense to use a single output to classify into three different classes?

Basically it's more useful to have three units, one per class, rather than a single one that yields a score from which you assign the appropriate class. The intuition behind this is the following:
A final unit (tanh or sigmoid) should receive at its input a large negative number for the negative class and a large positive number for the positive class.
So among the units feeding this final unit you need neurons that provide a strong signal for the positive and negative classes.
Now look at the neutral case: there must also be units that push the final unit's input close to 0. This adds complexity to the network and can hurt training, because these units must not only recognize neutral inputs but also suppress the contributions of the other units.
So, to sum up: an efficient model whose final output lies between -1 and 1 needs the same units as the three-output version would, plus the additional machinery that makes the outputs of the "positive" and "negative" units less significant in the neutral case. That extra complexity is more likely to hurt your training than to help it.
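For completeness, the conventional three-output alternative can be sketched in a few lines (a minimal NumPy illustration; the logits here are made-up stand-ins for whatever the last LSTM state produces):

```python
import numpy as np

def softmax(logits):
    # Shift by the max for numerical stability before exponentiating.
    z = logits - np.max(logits, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

CLASSES = ["negative", "neutral", "positive"]

# Hypothetical logits for one sentence, one per class:
logits = np.array([0.2, 0.1, 2.5])

probs = softmax(logits)
predicted = CLASSES[int(np.argmax(probs))]
print(predicted)  # positive
```

With three outputs the network only has to drive the correct logit high; no unit has to cancel the others out in the neutral case.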

Related

How to handle unbalanced labels in Multilabel Classification?

These oversimplified example target vectors (in my use case each 1 represents a product that a client bought at least once a month)
[1,1,1,0,0,0,0,0,1,0,0,0]
[1,1,1,0,0,0,0,0,0,0,0,0]
[0,1,0,0,0,0,1,0,0,0,0,0]
[1,0,1,0,0,0,0,0,0,0,0,0]
[1,1,1,0,0,0,0,0,1,0,0,0]
[1,1,0,0,0,0,0,0,0,0,0,0]
[1,1,0,0,0,1,0,0,0,0,1,0]
contain labels that are far more sparse than others. This means the target vectors contain some products that are almost always bought and many that are seldom bought.
In training, the ANN (the input layer uses a sigmoid activation and so does the output layer; the loss function is binary_crossentropy. What exactly the features used to predict the target vector are is not really relevant here, I think) only learns that putting 1 in the first 3 labels and 0 for the rest is good. I obviously want the model not to learn this pattern. Also, as a side note, I am more interested in true positives on the sparse labels than on the frequent labels. How should I handle this issue?
My only idea would be to exclude the frequent labels in the target vectors entirely, but this would only be my last resort.
There are two things I would try in this situation:
Add dropout layers (or some other layers that would decrease the dependence on certain neurons)
Use oversampling or undersampling techniques. In this case that would mean increasing the data from the under-represented classes (or decreasing the data from the over-represented ones)
But overall, I think regularization would be more effective.
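A related option worth mentioning is re-weighting the loss itself so rare labels count more. A hedged NumPy sketch of that idea (the weighting rule and all names here are my own, not from the answer; in practice you would plug such weights into your framework's loss):

```python
import numpy as np

# Toy targets from the question: rows are clients, columns are products.
Y = np.array([
    [1,1,1,0,0,0,0,0,1,0,0,0],
    [1,1,1,0,0,0,0,0,0,0,0,0],
    [0,1,0,0,0,0,1,0,0,0,0,0],
    [1,0,1,0,0,0,0,0,0,0,0,0],
], dtype=float)

# Per-label positive frequency; rarer labels get larger weights.
freq = Y.mean(axis=0)
weights = 1.0 / np.clip(freq, 0.05, None)
weights /= weights.sum()

def weighted_bce(y_true, y_pred, w, eps=1e-7):
    # Standard binary cross-entropy, but each label column is scaled by w.
    p = np.clip(y_pred, eps, 1 - eps)
    per_label = -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
    return float((per_label * w).mean())

# A lazy prediction that always emits the frequent pattern is now penalized
# mostly on the rare labels it misses, not rewarded for the easy ones.
lazy = np.tile(np.array([0.9, 0.9, 0.9] + [0.1] * 9), (4, 1))
print(weighted_bce(Y, lazy, weights))
```

This directly targets the stated goal of caring more about true positives on the sparse labels.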

Neural network input with exponential decay

Often, to improve learning rates, inputs to a neural network are preprocessed by scaling and shifting to be between -1 and 1. I'm wondering though if that's a good idea with an input whose graph would be exponentially decaying. For instance, if I had an input with integer values 0 to 100 distributed with the majority of inputs being 0 and smaller values being more common than large values, with 99 being very rare.
It seems that scaling and shifting them wouldn't be ideal, since the most common value would then become -1. How is this type of input best dealt with?
Suppose you're using a sigmoid-shaped activation function that is symmetric around the origin (such as tanh):
The trick to speed up convergence is to have the mean of the normalized data set be 0 as well. The choice of activation function is important because normalizing the input is not enough: you are not only learning the weights from the input to the first hidden layer; the input to the second hidden layer (and to the output) is learned as well, and it needs to obey the same rule to be effective. For the non-input layers this is handled by the activation function. The much-cited Efficient BackProp paper by LeCun summarizes these rules and has some nice explanations, which you should look up, because there are other things, like weight and bias initialization, that one should consider as well.
In chapter 4.3 he gives a formula to normalize the inputs so that the mean is close to 0 and the standard deviation is 1. If you need more sources, this FAQ is great as well.
I don't know your application scenario, but if you're using symbolic data and 0-100 is meant to represent percentages, then you could also apply softmax to the input layer to get a better input representation. It's also worth noting that some people prefer scaling to [0.1, 0.9] instead of [0, 1].
Edit: Rewritten to match comments.
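For a heavy-tailed, count-like input such as the 0-100 values described in the question, one common option (my suggestion, not part of the answer above) is a log transform before centering; it compresses the long tail so the common small values no longer pile up at one end of the range:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic exponentially-decaying integer input: mostly small, 99+ very rare.
x = np.minimum(rng.geometric(p=0.3, size=10_000) - 1, 100).astype(float)

# log1p compresses the long tail while keeping 0 mapped to 0 ...
z = np.log1p(x)
# ... then center to mean 0 and unit variance, per the LeCun rules above.
z = (z - z.mean()) / z.std()

print(round(float(z.mean()), 6), round(float(z.std()), 6))
```

Whether the log transform helps depends on the downstream model, so it is worth comparing against plain standardization on a validation set.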

Neural Network Normalization of Nominal Data for 1 Output Neuron

I am new to machine learning and AI and started with NN recently.
I've already gathered some information here on Stack Overflow, but at the moment I don't see the logic behind it all.
Let's take 4 nominal (but not ordinal) values [A, B, C, D] and 2 numerical values already normalized [0.35, 0.55] - so 2 input neurons, one for the nominal value and one for the numerical.
In the NN literature I mostly see that you have to use 4 input neurons for the encoding. But I don't need to predict those nominal values. I have only one output neuron, which at most represents a relationship, the way it would if I used it with expert systems and rules.
If I would normalize them to [0.2, 0.4, 0.6, 0.8] for example, isn't the NN able to distinguish between them? For the NN it's only a number, isn't it?
Naive approach and thinking:
A with 0.35 numerical leads to ideal 1.
B with 0.55 numerical leads to ideal 0.
C with 0.35 numerical leads to ideal 0.
D with 0.55 numerical leads to ideal 1.
Is there a mistake in my way of thinking about this approach?
Additional info (edit):
Those nominal values are included in the decision making (their significance, if measured with statistical tools, comes from combining them with the numerical values), depending on whether they are true or not. I know they can be encoded in binary, but the list of nominal values is a little bit larger.
Other example:
Symptom A with blood test 1 leads to diagnosis X (the ideal)
Symptom B with blood test 1 leads to diagnosis Y (the ideal)
Actually, expert systems are used for this. Symptoms are nominal values, but in combination with the blood test value you get the diagnosis. So the main question, finally: do I have to encode symptoms in a binary way, or can I replace symptoms with numbers? And if I can't replace them with numbers, why is binary representation the only way to use them in a NN?
INPUTS
Theoretically it doesn't really matter how you encode your inputs. As long as different samples are represented by different points in the input space, it is possible to separate them with a line - and that's what the input layer (if it's linear) is doing - it combines the inputs linearly. However, the way the data is laid out in the input space can have a huge impact on convergence time during learning. A simple way to see this: imagine a set of lines crossing the origin in 2D space. If your data is scattered around the origin, then it is likely that some of these lines will already separate the data into parts, and few "moves" will be required, especially if the data is linearly separable. On the other hand, if your input data is dense and far from the origin, then most of the initial discrimination lines won't even "hit" the data. So it will take a large number of weight updates to reach the data, and then a large number of precise steps to "cut" it into initial categories.
OUTPUTS
If you have categories, then encoding them as binary is quite important. Imagine that you have three categories: A, B and C. If you encode them with three neurons as 1;0;0, 0;1;0 and 0;0;1, then during learning - and later, with noisy data - a point the network is "not sure" about can end up as 0.5;0.0;0.5 on the output layer. That makes sense if it is really something conceptually between A and C, but surely not B. If you instead chose one output neuron and encoded A, B and C as 1, 2 and 3, then in the same situation the network would output the average of 1 and 3, which gives you 2! So the answer would be "definitely B" - clearly wrong!
Reference:
ftp://ftp.sas.com/pub/neural/FAQ.html
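The averaging argument above can be checked directly (a toy NumPy illustration; the "uncertain" point is simply taken as the midpoint of the two encodings):

```python
import numpy as np

# One-hot encodings of categories A and C.
A = np.array([1.0, 0.0, 0.0])
C = np.array([0.0, 0.0, 1.0])
uncertain_onehot = (A + C) / 2
print(uncertain_onehot)   # [0.5 0.  0.5] -> honestly "between A and C"

# Integer encodings A=1, B=2, C=3: the same midpoint collapses onto B.
uncertain_int = (1 + 3) / 2
print(uncertain_int)      # 2.0 -> "definitely B", clearly wrong
```

The one-hot output preserves the ambiguity; the integer encoding silently converts it into a confident wrong answer.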

How to pre-process dataset for maximum effectiveness with LibSVM Weka implementation

So I read a paper that said that processing your dataset correctly can increase LibSVM classification accuracy dramatically. I'm using the Weka implementation and would like some help making sure my dataset is optimal.
Here are my (example) attributes:
Power Numeric (real numbers, range is from 0 to 1.5132, 9000+ unique values)
Voltage Numeric (similar to Power)
Light Numeric (0 and 1 are the only 2 possible values)
Day Numeric (1 through 20 are the possible values, equal number of each value)
Range Nominal {1,2,3,4,5} <----these are the classes
My question is: which Weka pre-processing filters should I apply to make this dataset more effective for LibSVM?
Should I normalize and/or standardize the Power and Voltage data values?
Should I use a Discretization filter on anything?
Should I be binning the Power/Voltage values into a lot smaller number of bins?
Should I make the Light value Binary instead of numeric?
Should I normalize the Day values? Does it even make sense to do that?
Should I be using the Nominal to Binary or Nominal to something else filter for the classes "Range"?
Please advise on these questions and anything else you think I might have missed.
Thanks in advance!
Normalization is very important, as it influences the concept of distance which is used by SVM. The two main approaches to normalization are:
Scale each input dimension to the same interval, for example [0, 1]. This is by far the most common approach. It is necessary to prevent some input dimensions from completely dominating others. It is recommended by the LIBSVM authors in their beginner's guide (see Appendix B for examples).
Scale each instance to a given length. This is common in text mining / computer vision.
As to handling types of inputs:
Continuous: no work needed, SVM works on these implicitly.
Ordinal: treat as continuous variables. For example cold, lukewarm, hot could be modeled as 1, 2, 3 without implicitly defining an unnatural structure.
Nominal: perform one-hot encoding, e.g. for an input with N levels, generate N new binary input dimensions. This is necessary because you must avoid implicitly defining a varying distance between nominal levels. For example, modelling cat, dog, bird as 1, 2 and 3 implies that a dog and bird are more similar than a cat and bird which is nonsense.
Normalization must be done after substituting inputs where necessary.
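That order of operations can be sketched in NumPy (attribute names follow the question; the raw values are made up for illustration): substitute encodings first, then min-max scale every resulting input dimension to [0, 1].

```python
import numpy as np

# Toy rows: [Power, Voltage, Light, Day], made-up values.
X = np.array([
    [0.10, 0.80, 0, 1],
    [1.51, 0.20, 1, 10],
    [0.75, 0.55, 0, 20],
])

# Min-max scale every input dimension to [0, 1] (the LIBSVM guide's advice);
# constant columns are left unchanged to avoid division by zero.
lo, hi = X.min(axis=0), X.max(axis=0)
X_scaled = (X - lo) / np.where(hi > lo, hi - lo, 1.0)

# One-hot encode the nominal class labels Range in {1..5}.
y = np.array([1, 3, 5])
Y_onehot = np.eye(5)[y - 1]
print(X_scaled)
print(Y_onehot)
```

In Weka the equivalent steps are the Normalize filter for the inputs and NominalToBinary for the class encoding, applied in that order.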
To answer your questions:
Should I normalize and/or standardize the Power and Voltage data
values?
Yes, scale all (final) input dimensions to the same interval (including the dummies!).
Should I use a Discretization filter on anything?
No.
Should I be binning the Power/Voltage values into a lot smaller number of
bins?
No. Treat them as continuous variables (e.g. one input each).
Should I make the Light value Binary instead of numeric?
No, an SVM has no concept of binary variables and treats everything as numeric, so converting it would just lead to an extra type cast internally.
Should I normalize the Day values? Does it even make sense to do
that?
If you want to use 1 input dimension, you must normalize it just like all others.
Should I be using the Nominal to Binary or Nominal to some thing else filter for the classes "Range"?
Nominal to binary, using one-hot encoding.

Neural network classifier

Suppose you have two classes in 1D space, in any configuration: class A with 2 elements and class B with one element. The task is to distinguish between the two classes, i.e. to classify them. If you can choose an arbitrary activation function, what is the minimal number of neurons that can solve this?
I am thinking that you always have to use at least two neurons - or am I wrong?
Your question is somewhat related to the classical XOR problem for perceptrons. Suppose for a moment that it's a neural network with the specific activation function a perceptron has: the binary threshold. Then the task turns into a 1D XOR problem, and then indeed you need 2 neurons in the hidden layer and 1 neuron in the output layer to solve it. But you mention that an arbitrary activation function can be chosen. In that case we can choose a radial basis function (RBF) network. If it is possible to denote class A by an output value greater than T and class B by an output value less than T, then a single RBF neuron suffices to distinguish the classes. If you want every class to have its own output (whose value can be treated as a probability measure of the input belonging to the corresponding class), then you need 2 RBF neurons.
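A one-neuron RBF sketch for the hardest 1D configuration, where B sits between the two A points (the point positions, the Gaussian width, and the threshold are all chosen for illustration):

```python
import numpy as np

# Class A at -1 and +1, class B at 0: not separable by one threshold unit.
points = np.array([-1.0, 0.0, 1.0])
labels = ["A", "B", "A"]

def rbf_neuron(x, center=0.0, width=0.5):
    # A single Gaussian RBF unit; its output is high only near the center.
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

T = 0.5  # decision threshold: output > T -> class B, otherwise class A
preds = ["B" if rbf_neuron(x) > T else "A" for x in points]
print(preds)  # ['A', 'B', 'A']
```

Because the RBF response is non-monotonic, the single unit carves out an interval around B, which no single threshold unit can do in 1D.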
