How to encode categorical variables to pass to an SVM - machine-learning

I am doing some NLP tasks. One feature in my list is the POS tag of a sentence. How can I pass the POS tags as a feature to an SVM, given that it expects numerical data?

You can create a map/dictionary that converts each POS tag to a number.
List all POS tags and assign a particular number to each.
For example:
NN -> 1
JJ -> 2
VB -> 3
DT -> 4
...
Whenever you encounter a particular POS tag, replace it with its corresponding number.
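A minimal Python sketch (the tag-to-number map and the encode_tags helper are just illustrative, assuming the input is a list of (word, tag) pairs):

pos_to_id = {"NN": 1, "JJ": 2, "VB": 3, "DT": 4}  # extend with the full tag set you use

def encode_tags(tagged_sentence):
    # fall back to 0 for tags that are not in the map yet
    return [pos_to_id.get(tag, 0) for _, tag in tagged_sentence]

print(encode_tags([("the", "DT"), ("red", "JJ"), ("car", "NN")]))  # [4, 2, 1]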

Related

Stata timeseries rolling forecast

I'm new to Stata and have a question about its command language. I want to use my ARIMA model to forecast, i.e. use x[t], x[t-1], ... to produce an estimate xhat[t+1], then roll forward one time step to make the next forecast, rebuilding the model every N time steps.
I could duplicate code, running something like the following for T, T+1, T+2, etc.:
arima x if t<=T, arima(2,0,2)
predict xhat
to produce a series of xhats to compare with the in-sample x observations. There must be a more natural way to do this in the command language. Any suggestions or pointers would be very much appreciated.
Posting a working solution provided by Stata tech support:
webuse dfex
tsset month
generate int id = _n

capture program drop forecarima
program forecarima, rclass
    syntax [if]
    tempvar yhat
    arima unemp `if', arima(1,1,0)
    local T = e(tmax)
    local T1 = `T' + 1
    summarize id if month == `T1'
    local h = r(max)
    predict `yhat', y dynamic(`T')
    return scalar y = unemp[`h']
    return scalar yhat = `yhat'[`h']
end

rolling unemp = r(y) unemp_hat = r(yhat), window(400) recursive ///
    saving(results, replace): forecarima

use results, clear
browse
This provides output with both the prediction and the observed value available. The dates are off by one step, but that is easier left to post-processing.

Feature selection using statistical model

Problem statement:
I am working on a problem where I have to predict whether a customer will opt for a loan or not. I have converted all available data types (object, int) into integers, and now my data looks like below.
The highlighted column is my Target column, where 0 means Yes and 1 means No.
There are 47 independent columns in this data set.
I want to do feature selection on these columns against my Target column.
I started with a Z-test:
import numpy as np
import scipy.stats as st
import scipy.special as sp

def feature_selection_pvalue(df, col_name, samp_size=1000):
    relation_columns = []
    no_relation_columns = []
    H0 = 'There is no relation between target column and independent column'
    H1 = 'There is a relation between target column and independent column'
    sample_data = df[col_name].sample(samp_size)
    samp_mean = sample_data.mean()
    pop_mean = df[col_name].mean()
    pop_std = df[col_name].std()
    print(pop_mean)
    print(pop_std)
    print(samp_mean)
    n = samp_size
    q = .5
    # let's calculate z
    z = (samp_mean - pop_mean) / np.sqrt(pop_std * pop_std / n)
    print(z)
    pval = 2 * (1 - st.norm.cdf(z))
    print('p value is===' + str(pval))
    if pval < .05:
        print('Null hypothesis is Accepted for col ---- >' + H0 + col_name)
        no_relation_columns.append(col_name)
    else:
        print('Alternate Hypothesis is accepted -->' + H1)
        relation_columns.append(col_name)
    print('length of list ===' + str(len(relation_columns)))
    return relation_columns, no_relation_columns
When I run this function, I always get different results:
for items in df.columns:
    relation, no_relation = feature_selection_pvalue(df, items, 5000)
My questions are:
Is the above Z-test a reliable means of doing feature selection, when the result differs each time?
What would be a better approach to feature selection in this case? If possible, provide an example.
"What would be a better approach in this case to do feature selection, if possible provide an example"
Are you able to use scikit-learn? It offers a lot of examples and possibilities for selecting your features:
https://scikit-learn.org/stable/modules/feature_selection.html
If we look at the first one (Variance threshold):
from sklearn.feature_selection import VarianceThreshold
X = df[['age', 'balance',...]] #select your columns
sel = VarianceThreshold(threshold=(.8 * (1 - .8)))
X_red = sel.fit_transform(X)
This will keep only the columns that have some variance, dropping, for example, columns that contain the same value in (almost) every row.
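The same page also describes univariate selection against the target column; here is a minimal sketch with SelectKBest (the 'Target' column name and k=10 are placeholders for your data):

from sklearn.feature_selection import SelectKBest, chi2

X = df.drop(columns=['Target'])  # placeholder: the 47 independent columns
y = df['Target']

# chi2 requires non-negative features; use f_classif instead if some columns can be negative
selector = SelectKBest(score_func=chi2, k=10)
X_new = selector.fit_transform(X, y)
print(X.columns[selector.get_support()])  # the 10 highest-scoring columns

Unlike the sampling-based Z-test above, this uses every row, so it returns the same columns on every run.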

Autoencoder for Character Time-Series with deeplearning4j

I'm trying to create and train an LSTM Autoencoder on character sequences (strings). This is simply for dimensionality reduction, i.e. to be able to represent strings of up to T=1000 characters as fixed-length vectors of size N. For the sake of this example, let N = 10. Each character is one-hot encoded by arrays of size validChars (in my case validChars = 77).
I'm using a ComputationGraph to be able to later remove the decoder layers and use the remaining ones for encoding. By looking at dl4j-examples I have come up with this:
ComputationGraphConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(12345)
        .l2(0.0001)
        .weightInit(WeightInit.XAVIER)
        .updater(new Adam(0.005))
        .graphBuilder()
        .addInputs("input")
        .addLayer("encoder1", new LSTM.Builder().nIn(dictSize).nOut(250)
                .activation(Activation.TANH).build(), "input")
        .addLayer("encoder2", new LSTM.Builder().nIn(250).nOut(10)
                .activation(Activation.TANH).build(), "encoder1")
        .addVertex("fixed", new PreprocessorVertex(new RnnToFeedForwardPreProcessor()), "encoder2")
        .addVertex("sequenced", new PreprocessorVertex(new FeedForwardToRnnPreProcessor()), "fixed")
        .addLayer("decoder1", new LSTM.Builder().nIn(10).nOut(250)
                .activation(Activation.TANH).build(), "sequenced")
        .addLayer("decoder2", new LSTM.Builder().nIn(250).nOut(dictSize)
                .activation(Activation.TANH).build(), "decoder1")
        .addLayer("output", new RnnOutputLayer.Builder()
                .lossFunction(LossFunctions.LossFunction.MCXENT)
                .activation(Activation.SOFTMAX).nIn(dictSize).nOut(dictSize).build(), "decoder2")
        .setOutputs("output")
        .backpropType(BackpropType.TruncatedBPTT).tBPTTForwardLength(tbpttLength).tBPTTBackwardLength(tbpttLength)
        .build();
With this, I expected the number of features to follow the path:
[77,T] -> [250,T] -> [10,T] -> [10] -> [10,T] -> [250, T] -> [77,T]
I have trained this network and removed the decoder part like so:
ComputationGraph encoder = new TransferLearning.GraphBuilder(net)
        .setFeatureExtractor("fixed")
        .removeVertexAndConnections("sequenced")
        .removeVertexAndConnections("decoder1")
        .removeVertexAndConnections("decoder2")
        .removeVertexAndConnections("output")
        .addLayer("output", new ActivationLayer.Builder().activation(Activation.IDENTITY).build(), "fixed")
        .setOutputs("output")
        .setInputs("input")
        .build();
But when I encode a string of length 1000 with this encoder, it outputs an NDArray of shape [1000, 10] instead of a 1-dimensional vector of length 10. My goal is to represent the whole 1000-character sequence with one vector of length 10. What am I missing?
Nobody answered the question, but I found the answer in dl4j-examples, so I will post it here in case it is helpful to someone.
The part between the encoder and decoder LSTMs should look like this:
.addVertex("thoughtVector",
new LastTimeStepVertex("encoderInput"), "encoder")
.addVertex("duplication",
new DuplicateToTimeSeriesVertex("decoderInput"), "thoughtVector")
.addVertex("merge",
new MergeVertex(), "decoderInput", "duplication")
The important point is that we go many-to-one with LastTimeStepVertex and then one-to-many with DuplicateToTimeSeriesVertex. This way 'thoughtVector' really is a single vector representation of the whole sequence.
See the full example here: https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/recurrent/encdec/EncoderDecoderLSTM.java. Note that that example deals with word-level sequences, while my net above works with character-level sequences, but the idea is the same.

How do I form a feature vector for a classifier targeted at Named Entity Recognition?

I have a set of tags (different from the conventional Name, Place, Object etc.). In my case, they are domain-specific and I call them: Entity, Action, Incident. I want to use these as a seed for extracting more named-entities.
I came across this paper: "Efficient Support Vector Classifiers for Named Entity Recognition" by Isozaki et al. While I like the idea of using Support Vector Machines for named-entity recognition, I am stuck on how to encode the feature vector. In their paper, this is what they say:
For instance, the words in “President George Herbert Bush said Clinton is . . . ” are classified as follows: “President” = OTHER, “George” = PERSON-BEGIN, “Herbert” = PERSON-MIDDLE, “Bush” = PERSON-END, “said” = OTHER, “Clinton” = PERSON-SINGLE, “is” = OTHER. In this way, the first word of a person’s name is labeled as PERSON-BEGIN. The last word is labeled as PERSON-END. Other words in the name are PERSON-MIDDLE. If a person’s name is expressed by a single word, it is labeled as PERSON-SINGLE. If a word does not belong to any named entities, it is labeled as OTHER. Since IREX defines eight NE classes, words are classified into 33 categories.
Each sample is represented by 15 features because each word has three features (part-of-speech tag, character type, and the word itself), and two preceding words and two succeeding words are also used for context dependence. Although infrequent features are usually removed to prevent overfitting, we use all features because SVMs are robust.
Each sample is represented by a long binary vector, i.e., a sequence of 0 (false) and 1 (true). For instance, “Bush” in the above example is represented by a vector x = x[1] ... x[D] described below. Only 15 elements are 1.
x[1] = 0 // Current word is not ‘Alice’
x[2] = 1 // Current word is ‘Bush’
x[3] = 0 // Current word is not ‘Charlie’
x[15029] = 1 // Current POS is a proper noun
x[15030] = 0 // Current POS is not a verb
x[39181] = 0 // Previous word is not ‘Henry’
x[39182] = 1 // Previous word is ‘Herbert’
I don't really understand how the binary vector here is being constructed. I know I am missing a subtle point but can someone help me understand this?
There is a bag-of-words lexicon-building step that they omit.
Basically, you have to build a map from the (non-rare) words in the training set to indices. Let's say you have 20k unique words in your training set; then you have a mapping from every word in the training set to an index in [0, 20000).
The feature vector is then basically a concatenation of a few very sparse vectors that have a 1 corresponding to a particular word and 19,999 0s, then a 1 for a particular POS tag and 50 other 0s for the non-active POS tags. This is generally called a one-hot encoding: http://en.wikipedia.org/wiki/One-hot
import numpy as np

def encode_word_feature(word, POStag, char_type,
                        word_index_mapping, POS_index_mapping, char_type_index_mapping):
    # it makes a lot of sense to use a sparsely encoded vector rather than a dense one, but it's clearer this way
    ret = np.zeros(len(word_index_mapping) + len(POS_index_mapping) + len(char_type_index_mapping))
    so_far = 0
    ret[word_index_mapping[word] + so_far] = 1
    so_far += len(word_index_mapping)
    ret[POS_index_mapping[POStag] + so_far] = 1
    so_far += len(POS_index_mapping)
    ret[char_type_index_mapping[char_type] + so_far] = 1
    return ret

def encode_context(context):
    return np.concatenate([
        encode_word_feature(context.two_words_ago, context.two_pos_ago, context.two_char_types_ago,
                            word_index_mapping, POS_index_mapping, char_type_index_mapping),
        encode_word_feature(context.one_word_ago, context.one_pos_ago, context.one_char_types_ago,
                            word_index_mapping, POS_index_mapping, char_type_index_mapping),
        # ... the pattern continues for the current word and the two following words
    ])
So your feature vector is about size 100k with a little extra for POS and char tags, and is almost entirely 0s, except for 15 1s in positions picked according to your feature to index mappings.
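As a quick sanity check of the offsets, here is a tiny usage example of the function above with made-up mappings (all names are hypothetical):

word_index_mapping = {"Bush": 0, "Herbert": 1}
POS_index_mapping = {"NNP": 0, "VB": 1}
char_type_index_mapping = {"capitalized": 0, "lower": 1}

vec = encode_word_feature("Bush", "NNP", "capitalized",
                          word_index_mapping, POS_index_mapping, char_type_index_mapping)
print(vec)  # [1. 0. 1. 0. 1. 0.] -- one 1 in each of the three feature groups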

Weka normalizing columns

I have an ARFF file containing 14 numerical columns. I want to perform a normalization on each column separately, that is, modifying the values from each column to (actual_value - min(this_column)) / (max(this_column) - min(this_column)). Hence, all values from a column will be in the range [0, 1]. The min and max values of a column might differ from those of another column.
How can I do this with Weka filters?
Thanks
This can be done using
weka.filters.unsupervised.attribute.Normalize
After applying this filter, all values in each column will be in the range [0, 1].
That's right. Just wanted to remind about the difference between "normalization" and "standardization". What is described in the question is "normalization" (min-max scaling to [0, 1]), while "standardization" assumes a roughly Gaussian distribution and rescales each attribute by its mean and standard deviation. If you have an outlier in your data, the Normalize filter might hurt your data distribution, as the min or max might be much farther out than the other instances.
In this case, we can use the weka.filters.unsupervised.attribute.Normalize filter to normalize all columns, but if we want to normalize only some columns, the following is the best approach.
To apply Normalize to selected columns only
The unsupervised.attribute.PartitionedMultiFilter can be used for this task.
You have to configure its filters and ranges sections as per your need.
For example, if I want to normalize only the humidity attribute:
Step 01:
After adding the PartitionedMultiFilter -> click the filters text box -> choose Normalize (weka.filters.unsupervised.attribute.Normalize) -> and edit the Normalize filter as needed (by giving the scale and translation values).
Step 02:
Click the ranges text box -> delete the default range (which is first-last) -> then add the column number you want to filter -> click OK -> click Apply.
Now the filter will be applied only to the selected (humidity) column.
Here is a working normalization example with K-Means in Java.
final SimpleKMeans kmeans = new SimpleKMeans();
final String[] options = weka.core.Utils
        .splitOptions("-init 0 -max-candidates 100 -periodic-pruning 10000 -min-density 2.0 -t1 -1.25 -t2 -1.0 -N 10 -A \"weka.core.EuclideanDistance -R first-last\" -I 500 -num-slots 1 -S 50");
kmeans.setOptions(options);
kmeans.setSeed(10);
kmeans.setPreserveInstancesOrder(true);
kmeans.setNumClusters(25);
kmeans.setMaxIterations(1000);

final BufferedReader datafile = new BufferedReader(new FileReader("/Users/data.arff"));
Instances data = new Instances(datafile);

// normalize all numeric attributes to [0, 1]
final Normalize normalizeFilter = new Normalize();
normalizeFilter.setInputFormat(data);
data = Filter.useFilter(data, normalizeFilter);

// remove the class column [0] before clustering
data.setClassIndex(0);
final Remove removeFilter = new Remove();
removeFilter.setAttributeIndices("" + (data.classIndex() + 1));
removeFilter.setInputFormat(data);
data = Filter.useFilter(data, removeFilter);

kmeans.buildClusterer(data);
System.out.println(kmeans.toString());

// evaluate the clusterer
final ClusterEvaluation eval = new ClusterEvaluation();
eval.setClusterer(kmeans);
eval.evaluateClusterer(data);
System.out.println(eval.clusterResultsToString());
If you have a CSV file, replace the BufferedReader lines above with the DataSource shown below:
final DataSource source = new DataSource("/Users/data.csv");
final Instances data = source.getDataSet();
