I am working on a naive bayes classifier that takes a bunch of user profile data such as:
Name
City
State
School
Email Address
URLS { ... }
The last bit is a bunch of URLs that are search results for the user, gathered by a Google search for the user by name. The objective is to decide whether each search result is accurate (i.e., it is about the person) or inaccurate. To do this, each piece of profile data is searched for within each link in the URL array, and a binary value is assigned per attribute when that piece of profile data (e.g., City) is matched on the page. The results are then represented as a vector of binaries (e.g., 1 0 0 0 1 means Name and Email Address were matched on the URL).
My question revolves around creating the optimal training set. If a person's profile has incomplete information (such as a missing email address), is that still a good profile to use in my training set? Should I only train on profiles with complete information? Would it make sense to build different training sets (one for each combination of complete profile attributes) and then, when I am given a user's URLs to test with, determine which training set to use based on how much profile data is on record for the test person? How can I go about this?
In general, there is no "should": whichever way you create a model, the only thing that matters is its performance.
That said, it is highly unlikely you'd be able to create a proper model with a hand-picked training set. The simple idea is that you should train your model on data that looks exactly like the live data. Will the live data have missing values, incomplete profiles, etc.? If so, your model needs to know what to do in such situations, and therefore such profiles should be in the training set.
Yes, you can certainly make a model composed of several sub-models, but you might run into problems with not having enough training data and with overfitting: you'll have to create multiple good models for it to work, which is harder. I suspect it would be better to leave this reasoning to the model itself rather than trying to hand-hold it in the right direction; that is what machine learning is for, to save you the trouble. But there is really no way to say before trying it on your data set. Again, whatever works in your particular case is right.
Because you're using Naive Bayes as your model (and only because of that), you can exploit the independence assumption to use every piece of data you have available and only consider the features present in the new sample.
You have features f_1 ... f_n, some of which may or may not be present in any given entry. The posterior probability p( relevant | f_1 ... f_n ) decomposes as:
p( relevant | f_1 ... f_n ) \propto p( relevant ) * p( f_1 | relevant ) * p( f_2 | relevant ) * ... * p( f_n | relevant )
p( irrelevant | f_1 ... f_n ) is similar. If some particular f_i isn't present, just drop its term from both posteriors; since they're defined over the same feature space, the probabilities remain comparable and can be normalised in the standard way. All you then need is to estimate the terms p( f_i | relevant ): this is simply the fraction of the relevant links where the i-th feature is 1 (possibly smoothed). To estimate this parameter, simply use the set of relevant links where the i-th feature is defined.
This is only going to work if you implement it yourself, as I don't think you can do this with a standard package, but given how easy it is to implement I wouldn't be concerned.
Edit: an example
Imagine you have the following features and data (they're binary, since you say that's what you have, but the extension to categorical or continuous is not difficult, I hope):
D = [ {email: 1, city: 1, name: 1, RELEVANT: 1},
      {city: 1, name: 1, RELEVANT: 0},
      {city: 0, email: 0, RELEVANT: 0},
      {name: 1, city: 0, email: 1, RELEVANT: 1} ]
where each element of the list is an instance, and the target variable for classification is the special RELEVANT field (note that some of these instances have some variables missing).
You then want to classify the following instance, missing the RELEVANT field since that's what you're hoping to predict:
t = {email: 0, name: 1}
The posterior probability
p(RELEVANT=1 | t) = [p(RELEVANT=1) * p(email=0|RELEVANT=1) * p(name=1|RELEVANT=1)] / evidence(t)
while
p(RELEVANT=0 | t) = [p(RELEVANT=0) * p(email=0|RELEVANT=0) * p(name=1|RELEVANT=0)] / evidence(t)
where evidence(t) is just a normaliser obtained by summing the two numerators above.
To get each of the parameters of the form p(email=0|RELEVANT=1), look at the fraction of training instances where RELEVANT=1 which have email=0:
p(email=0|RELEVANT=1) = count(email=0,RELEVANT=1) / [count(email=0,RELEVANT=1) + count(email=1,RELEVANT=1)].
Notice that this term simply ignores instances for which email is not defined.
In this example, the posterior probability of relevance goes to zero because count(email=0, RELEVANT=1) is zero. So I would suggest using a smoothed estimator where you add one to every count, so that:
p(email=0|RELEVANT=1) = [count(email=0,RELEVANT=1)+1] / [count(email=0,RELEVANT=1) + count(email=1,RELEVANT=1) + 2].
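To make this concrete, here is a minimal Python sketch of the smoothed estimator described above (the function names, the smoothing constant, and the data layout are mine, not from any particular library; the toy data mirrors the example):

import fasttext  # not needed here; see below for where fasttext is used
from collections import defaultdict

# Toy training data from the example above: each dict is one instance,
# with possibly-missing binary features and a RELEVANT label.
D = [
    {"email": 1, "city": 1, "name": 1, "RELEVANT": 1},
    {"city": 1, "name": 1, "RELEVANT": 0},
    {"city": 0, "email": 0, "RELEVANT": 0},
    {"name": 1, "city": 0, "email": 1, "RELEVANT": 1},
]

def estimate_params(data):
    """Count feature values per class, using only the instances where the
    feature is actually defined."""
    counts = defaultdict(lambda: defaultdict(int))  # counts[(feature, r)][value]
    priors = defaultdict(int)                       # class counts for p(RELEVANT=r)
    for row in data:
        r = row["RELEVANT"]
        priors[r] += 1
        for f, v in row.items():
            if f != "RELEVANT":
                counts[(f, r)][v] += 1
    return counts, priors

def posterior_relevant(instance, counts, priors, smoothing=1):
    """Return p(RELEVANT=1 | instance), dropping features absent from the
    instance and applying add-one smoothing to each conditional estimate."""
    total = sum(priors.values())
    scores = {}
    for r in (0, 1):
        score = priors[r] / total
        for f, v in instance.items():
            c = counts[(f, r)]
            score *= (c[v] + smoothing) / (c[0] + c[1] + 2 * smoothing)
        scores[r] = score
    return scores[1] / (scores[0] + scores[1])

counts, priors = estimate_params(D)
print(posterior_relevant({"email": 0, "name": 1}, counts, priors))  # roughly 0.3 on this toy data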
Related
If we are not sure about the nature of categorical features, i.e., whether they are nominal or ordinal, which encoding should we use: Ordinal-Encoding or One-Hot-Encoding?
Is there a clearly defined rule on this topic?
I see a lot of people using Ordinal-Encoding on Categorical Data that doesn't have a Direction.
Suppose a frequency table:
some_data[some_col].value_counts()
[OUTPUT]
color_white 11413
color_green 4544
color_black 1419
color_orang 3
Name: shirt_colors, dtype: int64
A lot of people prefer to do Ordinal-Encoding on this column, but I am hell-bent on going with One-Hot-Encoding.
My view is that Ordinal Encoding will allot these colors ordered numbers, which would imply a ranking, and there is no ranking in the first place. In other words, my model should not think of color_white as 4 and color_orang as 0 or 1 or 2.
Keep in mind that there is no hint of any ranking or order in the Data Description as well.
I have the following understanding of this topic:
Numbers that have neither direction nor magnitude are Nominal Variables. For example, fruit_list = ['apple', 'orange', 'banana']. Unless there is a specific context, this set would be called nominal. For such variables, we should perform either get_dummies or one-hot-encoding.
Ordinal Variables, on the other hand, have a direction. For example, shirt_sizes_list = ['large', 'medium', 'small']. If the same fruit list has a context behind it, e.g., price or nutritional value, that gives the fruits some ranking or order, we would call it an Ordinal Variable. For Ordinal Variables, we perform Ordinal-Encoding.
Is my understanding correct?
Kindly provide your feedback
This topic has turned into a nightmare
Thank you!
You're right. The one thing to consider when choosing between OrdinalEncoder and OneHotEncoder is whether the order of the data matters.
Most ML algorithms will assume that two nearby values are more similar than two distant values. This may be fine in some cases e.g., for ordered categories such as:
quality = ["bad", "average", "good", "excellent"] or
shirt_size = ["large", "medium", "small"]
but it is obviously not the case for the:
color = ["white","orange","black","green"]
column (except in cases where you need to consider a spectrum, say from white to black; note that in such a case the white category should be encoded as 0 and black as the highest number in your categories), or if, for example, categories 0 and 4 are more similar than categories 0 and 1. To fix this issue, a common solution is to create one binary attribute per category (One-Hot encoding).
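For completeness, here is a small scikit-learn/pandas sketch of the two encodings on the shirt-colour column (the DataFrame and its values are made up for illustration):

import pandas as pd
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

# Hypothetical data mirroring the frequency table above.
df = pd.DataFrame({"shirt_colors": ["color_white", "color_green", "color_black", "color_orang"]})

# Ordinal encoding: assigns an integer per category (here in alphabetical order),
# which implies a ranking that does not exist for colours.
ordinal = OrdinalEncoder().fit_transform(df[["shirt_colors"]])

# One-hot encoding: one binary column per colour, no implied order.
one_hot = OneHotEncoder().fit_transform(df[["shirt_colors"]]).toarray()
# pandas equivalent: pd.get_dummies(df["shirt_colors"])

print(ordinal)
print(one_hot)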
I am doing a college project where I need to compare a string with a list of other strings. I want to know if there is any kind of library that can do this.
Suppose I have a table called DOCTORS_DETAILS.
The other table names are: HOSPITAL_DEPARTMENTS, DOCTOR_APPOINTMENTS, PATIENT_DETAILS, PAYMENTS, etc.
Now I want to calculate which of those is most relevant to DOCTORS_DETAILS.
Expected output can be,
DOCTOR_APPOINTMENTS - more relevant, because the term DOCTOR matches in both strings
PATIENT_DETAILS - the term DETAILS is present in both strings
HOSPITAL_DEPARTMENTS - least relevant
PAYMENTS - least relevant
Therefore I want to find RELEVANCE based on the number of terms shared by the two strings in question.
Ex: DOCTORS_DETAILS -> DOCTOR_APPOINTMENT (1/2) > DOCTOR_ADDRESS_INFORMATION (1/3) > DOCTOR_SPECIALIZATION_DEGREE_INFORMATION (1/4) > PATIENT_INFO (0/2)
Semantic similarity is a common NLP problem. There are multiple approaches to look into, but at their core they all are going to boil down to:
Turn each piece of text into a vector
Measure distance between vectors, and call closer vectors more similar
Three possible ways to do step 1 are:
tf-idf
fasttext
bert-as-service
To do step 2, you almost certainly want to use cosine distance. It is pretty straightforward in Python; here is an implementation from a blog post:
import numpy as np

def cos_sim(a, b):
    """Takes 2 vectors a, b and returns the cosine similarity according
    to the definition of the dot product
    """
    dot_product = np.dot(a, b)
    norm_a = np.linalg.norm(a)
    norm_b = np.linalg.norm(b)
    return dot_product / (norm_a * norm_b)
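As a quick sanity check (the vectors here are made up), you could call it like this:

a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 2.0, 2.5])
print(cos_sim(a, b))  # close to 1.0, since the vectors point in nearly the same direction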
For your particular use case, my instincts say to use fasttext. The official site shows how to download some pretrained word vectors, but you will want to download a pretrained model (see this GH issue; use https://dl.fbaipublicfiles.com/fasttext/vectors-english/wiki-news-300d-1M-subword.bin.zip).
Then you'd want to do something like:
import fasttext

model = fasttext.load_model("model_filename.bin")

def order_tables_by_name_similarity(main_table, candidate_tables):
    '''Note: we use a fasttext model, not just pretrained vectors, so we get subword information.
    You can modify this to also output the distances if you need them.
    '''
    main_v = model[main_table]
    similarity_to_main = lambda w: cos_sim(main_v, model[w])
    return sorted(candidate_tables, key=similarity_to_main, reverse=True)

order_tables_by_name_similarity("DOCTORS_DETAILS", ["HOSPITAL_DEPARTMENTS", "DOCTOR_APPOINTMENTS", "PATIENT_DETAILS", "PAYMENTS"])
# outputs: ['PATIENT_DETAILS', 'DOCTOR_APPOINTMENTS', 'HOSPITAL_DEPARTMENTS', 'PAYMENTS']
If you need to put this in production, the giant model size (6.7GB) might be an issue. At that point, you'd want to build your own model, and constrain the model size. You can probably get roughly the same accuracy out of a 6MB model!
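If you do go down that road, training a smaller unsupervised model with the fasttext Python package looks roughly like the sketch below (the corpus file name, dimension, and n-gram range are illustrative choices, not from the original answer):

import fasttext

# Train a skip-gram model on your own plain-text corpus; a small vector
# dimension and a restricted subword n-gram range keep the saved model small.
small_model = fasttext.train_unsupervised(
    "my_corpus.txt",     # hypothetical corpus file, one document per line
    model="skipgram",
    dim=50,              # 50-dimensional vectors instead of 300
    minn=2, maxn=5,      # subword n-gram range
)
small_model.save_model("small_model.bin")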
I have a ML.net project and as of right now everything has gone great. I have a motor that collects a power reading 256 times around each rotation and I push that into a model. Right now it determines the state of the motor nearly perfectly. The motor itself only has room for 38 values on it at a time so I have been spending several rotations to collect the full 256 samples for my training data.
I would like to cut the sample size down to 38 so that I can determine the motor's state every rotation. If I just evenly space the samples down to 38, my model degrades by a lot. I know I am not feeding the model the features it considers most important; I am just guessing and selecting data for the model more or less at random.
Is there a way I can see the importance of each value in the array during the training process? I was thinking I could use IDataView for this and I found the below statement about it (link).
Standard ML schema: The IDataView system does not define, nor prescribe, standard ML schema representation. For example, it does not dictate representation of nor distinction between different semantic interpretations of columns, such as label, feature, score, weight, etc. However, the column metadata support, together with conventions, may be used to represent such interpretations.
Does this mean I can print out such things as weight for each column and how would I do that?
I have actually only been working with ML.net for a couple of weeks now, so I apologize if the question is naive; I assure you I have googled this as many ways as I can think of. Any advice would be appreciated. Thanks in advance.
EDIT:
Thank you for the answer; I was going down a completely useless path. I have been trying to get it to work following the example you linked to. I have 260 columns of numbers and one column with the condition as one of five text strings. This is the condition I am trying to predict.
The first time I tried it, it threw the error "expecting single but got string". No problem: I used .Append(mlContext.Transforms.Conversion.MapValueToKey("Label", "Label")) to convert to key values, and then it threw the error "expected Single, got Key UInt32". Any ideas on how to push that into this function?
At any rate, thank you for the reply, but I guess my upvotes don't count yet, sorry. Hopefully I can upvote it later, or someone else here can. Below is the code example.
//Create MLContext
MLContext mlContext = new MLContext();
//Load Data
IDataView data = mlContext.Data.LoadFromTextFile<ModelInput>(TRAIN_DATA_FILEPATH, separatorChar: ',', hasHeader: true);
// 1. Get the column name of input features.
string[] featureColumnNames =
data.Schema
.Select(column => column.Name)
.Where(columnName => columnName != "Label").ToArray();
// 2. Define estimator with data pre-processing steps
IEstimator<ITransformer> dataPrepEstimator =
mlContext.Transforms.Concatenate("Features", featureColumnNames)
.Append(mlContext.Transforms.NormalizeMinMax("Features"))
.Append(mlContext.Transforms.Conversion.MapValueToKey("Label", "Label"));
// 3. Create transformer using the data pre-processing estimator
ITransformer dataPrepTransformer = dataPrepEstimator.Fit(data);//error here
// 4. Pre-process the training data
IDataView preprocessedTrainData = dataPrepTransformer.Transform(data);
// 5. Define Stochastic Dual Coordinate Ascent machine learning estimator
var sdcaEstimator = mlContext.Regression.Trainers.Sdca();
// 6. Train machine learning model
var sdcaModel = sdcaEstimator.Fit(preprocessedTrainData);
ImmutableArray<RegressionMetricsStatistics> permutationFeatureImportance =
mlContext
.Regression
.PermutationFeatureImportance(sdcaModel, preprocessedTrainData, permutationCount: 3);
// Order features by importance
var featureImportanceMetrics =
permutationFeatureImportance
.Select((metric, index) => new { index, metric.RSquared })
.OrderByDescending(myFeatures => Math.Abs(myFeatures.RSquared.Mean));
Console.WriteLine("Feature\tPFI");
foreach (var feature in featureImportanceMetrics)
{
Console.WriteLine($"{featureColumnNames[feature.index],-20}|\t{feature.RSquared.Mean:F6}");
}
I believe what you are looking for is called Permutation Feature Importance. It tells you which features are most important by changing each feature in isolation and measuring how much that change affects the model's performance metrics, so you can see which features the model relies on most.
Interpret model predictions using Permutation Feature Importance is the doc that describes how to use this API in ML.NET.
You can also use an open-source set of packages that are much more sophisticated than what is found in ML.NET. I have an example on my GitHub of how to use R with advanced explainer packages to explain ML.NET models. You can get local instance explanations as well as global model breakdowns/details/diagnostics/feature interactions, etc.
https://github.com/bartczernicki/BaseballHOFPredictionWithMlrAndDALEX
I am implementing an algorithm from a paper that I will be using in my master's work, but I've come across some problems regarding the time it takes to perform some operations.
Before I get into details, I just want to add that my data set comprises roughly 4 million data points.
I have two lists of tuples that I get from a framework (annoy) that calculates the cosine similarity between a vector and every other vector in the dataset. The final format is like this:
[(name1, cosine), (name2, cosine), ...]
Because of the algorithm, I have two such lists with the same names (the first value of each tuple) but two different cosine similarities. What I have to do is sum the cosines from both lists, then order the result and get the top-N highest cosine values.
My issue is that it is taking too long. My current code for this implementation is as follows:
from collections import defaultdict

def topN(self, user, session):
    upref = self.m2vTN.get_user_preference(user)
    spref = self.sm2vTN.get_user_preference(session)
    # list of tuples 1
    most_su = self.indexer.most_similar(upref, len(self.m2v.wv.vocab))
    # list of tuples 2
    most_ss = self.indexer.most_similar(spref, len(self.m2v.wv.vocab))
    # concatenate both lists and accumulate the cosines per name in a dict
    d = defaultdict(int)
    for name, cosine in (most_ss + most_su):
        d[name] += cosine
    # convert the dict into a list, then sort it by summed cosine, descending
    _list = list(d.items())
    _list.sort(key=lambda x: x[1], reverse=True)
    return [x[0] for x in _list[:self.N]]
How do I make this code faster? I've tried using threads, but I'm not sure it will help. Getting the lists is not the problem here; the concatenation and sorting are.
Thanks! English is not my native language, so sorry for any misspelling.
What do you mean by "too long"? How large are the two lists? Is there a chance your model, and interim results, are larger than RAM and thus forcing virtual-memory paging (which would create frustrating slowness)?
If you are in fact getting the cosine similarity with all vectors in the model, the annoy indexer isn't helping any. (Its purpose is to get a small subset of nearest neighbors much faster, at the expense of perfect accuracy. But if you're calculating the similarity to every candidate, there's no speedup or advantage to using ANNOY.)
Further, if you're going to combine all of the distances from two such calculations, there's no need for the sorting that most_similar() usually does; it just makes combining the values more complex later. For the gensim vector models, you can supply a False-ish topn value to get the unsorted distances to all model vectors, in order. You'd then have two large arrays of distances, in the model's same native order, which are easy to add together elementwise. For example:
udists = self.m2v.most_similar(positive=[upref], topn=False)
sdists = self.m2v.most_similar(positive=[spref], topn=False)
combined_dists = udists + sdists
The combined_dists aren't labeled, but will be in the same order as self.m2v.index2entity. You could then sort them, in a manner similar to what the most_similar() method itself does, to find the ranked closest. See for example the gensim source code for that part of most_similar():
https://github.com/RaRe-Technologies/gensim/blob/9819ce828b9ed7952f5d96cbb12fd06bbf5de3a3/gensim/models/keyedvectors.py#L557
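A minimal sketch of that final step, inside your topN() method (this assumes combined_dists from above is a numpy array and uses the older index2entity attribute mentioned here; check the attribute name against your gensim version):

import numpy as np

# combined_dists holds one summed similarity per model vector, in the model's native order.
top_idx = np.argsort(combined_dists)[::-1][:self.N]          # indices of the N largest sums
top_names = [self.m2v.index2entity[i] for i in top_idx]      # map indices back to names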
Finally, you might not need to be doing this calculation yourself at all. You can provide more-than-one vector to most_similar() as the positive target, and then it will return the vectors closest to the average of both vectors. For example:
sims = self.m2v.most_similar(positive=[upref, spref], topn=len(self.m2v))
This won't be the same value/ranking as your other sum, but may behave very similarly. (If you wanted less-than-all of the similarities, then it might make sense to use the ANNOY indexer this way, as well.)
I'm trying to create a model with a training dataset and want to label the records in a test data set.
All the tutorials and help I find online only cover cross-validation with a single data set, i.e., the training dataset. I couldn't find how to use test data. I tried to apply the resulting model to the test set, but the test set seems to have a different number of attributes than the training set after pre-processing. This is a text classification problem.
At the end I get some output like this
18.03.2013 01:47:00 Results of ResultWriter 'Write as Text (2)' [1]:
18.03.2013 01:47:00 SimpleExampleSet:
5275 examples,
366 regular attributes,
special attributes = {
confidence_1 = #367: confidence(1) (real/single_value)
confidence_5 = #368: confidence(5) (real/single_value)
confidence_2 = #369: confidence(2) (real/single_value)
confidence_4 = #370: confidence(4) (real/single_value)
prediction = #366: prediction(label) (nominal/single_value)/values=[1, 5, 2, 4]
}
But what I wanted is all my examples to be labelled.
It seems that my test data and training data have different numbers of attributes; I see many of the following in the logs.
Mar 18, 2013 1:46:41 AM WARNING: Kernel Model: The given example set does not contain a regular attribute with name 'wireless'. This might cause problems for some models depending on this particular attribute.
But how do we solve such a problem in text classification, since we cannot know the number and names of the attributes beforehand?
Can someone please give me some pointers?
You probably use a Process Documents operator to preprocess both the training and the test set. It is important that both of these operators are set up identically. To "synchronize" the wordlist, i.e., to consider the same set of words in both of them, you have to connect the wordlist (wor) output of the Process Documents operator used for training to the corresponding input port of the Process Documents operator used for preprocessing the test set.