I have a character feature for the weather condition, e.g. rain, snow, and so on.
I'd like to feed this feature to a random forest. What kind of transformation can I do to turn it into a numeric feature?
Thanks!
You can convert a categorical variable into numbers by turning the single attribute into n attributes, where n is the number of binary digits necessary to represent the total number of options.
For example, if I have an attribute [weather] that can take the values "rain", "sun", and "snow", then you could instead create 2 dummy attributes [weather1] and [weather0]. The reason you can do this with 2 dummy attributes is that 3 can be represented in binary with 2 digits: 11.
Then instead of using "rain" you would represent the category as a binary value across the two dummy attributes:
"rain" is first so it would be 01 in binary so that feature would have a 0 for [weather1] and a 1 for [weather0]. "sun" is second so you would represent it as 10 and "snow" is third so you could represent it as 11. The order isn't important so long as it's consistent across your variables.
If we think of these values as Python dictionaries, the example becomes clearer:
feature[weather] = "rain"
new_feature[weather] = [0, 1]
or
new_feature[weather0] = 1, new_feature[weather1] = 0
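For illustration, here is a minimal Python sketch of the binary encoding described above (the helper name and the weather1/weather0 columns are just the ones from the example):
categories = ["rain", "sun", "snow"]   # "rain" -> 1 (binary 01), "sun" -> 2 (10), "snow" -> 3 (11)

def encode(value, n_bits=2):
    # 1-based index of the category, spread over n_bits binary digits
    code = categories.index(value) + 1
    return [(code >> bit) & 1 for bit in reversed(range(n_bits))]

print(encode("rain"))   # [0, 1] -> weather1 = 0, weather0 = 1
print(encode("snow"))   # [1, 1] -> weather1 = 1, weather0 = 1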
You shouldn't. The weather condition is a categorical variable, which random forest handles natively. Leave it as it is and let the algorithm work as it should.
If we are not sure about the nature of categorical features, i.e. whether they are nominal or ordinal, which encoding should we use: Ordinal-Encoding or One-Hot-Encoding?
Is there a clearly defined rule on this topic?
I see a lot of people using Ordinal-Encoding on Categorical Data that doesn't have a Direction.
Suppose a frequency table:
some_data[some_col].value_counts()
[OUTPUT]
color_white 11413
color_green 4544
color_black 1419
color_orang 3
Name: shirt_colors, dtype: int64
There are a lot of people who prefer to do Ordinal-Encoding on this column, while I am hell-bent on going with One-Hot-Encoding.
My view is that doing Ordinal-Encoding will allot these colors some ordered numbers, which would imply a ranking, and there is no ranking in the first place. In other words, my model should not think of color_white as 4 and color_orang as 0, 1, or 2.
Keep in mind that there is no hint of any ranking or order in the Data Description as well.
I have the following understanding of this topic:
Values that have neither a direction nor a magnitude are Nominal Variables. For example, fruit_list = ['apple', 'orange', 'banana']. Unless there is a specific context, this set would be considered nominal. For such variables, we should perform either get_dummies or one-hot encoding.
Ordinal Variables, on the other hand, have a direction. For example, shirt_sizes_list = [large, medium, small]. If the same fruit list had a context behind it, e.g. price or nutritional value, that gave the fruits in the fruit_list some ranking or order, we'd call it an Ordinal Variable. And for Ordinal Variables, we perform Ordinal-Encoding.
Is my understanding correct?
Kindly provide your feedback
This topic has turned into a nightmare
Thank you!
You're right. The one thing to consider when choosing between OrdinalEncoder and OneHotEncoder is whether the order of the data matters.
Most ML algorithms will assume that two nearby values are more similar than two distant values. This may be fine in some cases, e.g. for ordered categories such as:
quality = ["bad", "average", "good", "excellent"] or
shirt_size = ["large", "medium", "small"]
but it is obviously not the case for the:
color = ["white","orange","black","green"]
column (except for cases where you need to consider a spectrum, say from white to black; note that in that case the white category should be encoded as 0 and black as the highest number among your categories), or if, say, categories 0 and 4 happen to be more similar than categories 0 and 1. To fix this issue, a common solution is to create one binary attribute per category (One-Hot encoding).
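To make the difference concrete, here is a minimal scikit-learn sketch using the shirt-color values from the question (OrdinalEncoder and OneHotEncoder are the standard sklearn.preprocessing classes; the exact integer assignment depends on the alphabetical order of the categories):
import numpy as np
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

colors = np.array([["white"], ["green"], ["black"], ["orange"]])

# Ordinal encoding assigns one integer per category, which implies an order
print(OrdinalEncoder().fit_transform(colors).ravel())   # e.g. [3. 1. 0. 2.]

# One-hot encoding creates one binary column per category, with no implied order
print(OneHotEncoder().fit_transform(colors).toarray())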
Recently, I have been implementing an algorithm from a paper that I will be using in my master's work, but I've come across some problems regarding the time it takes to perform some operations.
Before I get into details, I just want to add that my data set comprises roughly 4kk (4 million) data points.
I have two lists of tuples that I get from a framework (annoy) that calculates the cosine similarity between a vector and every other vector in the dataset. The final format is like this:
[(name1, cosine), (name2, cosine), ...]
Because of the algorithm, I have two such lists with the same names (the first value of each tuple) but two different cosine similarities. What I have to do is sum the cosines from both lists, then order the result and take the top-N highest cosine values.
My issue is that it is taking too long. My current code for this implementation is as follows:
def topN(self, user, session):
    upref = self.m2vTN.get_user_preference(user)
    spref = self.sm2vTN.get_user_preference(session)
    # list of tuples 1
    most_su = self.indexer.most_similar(upref, len(self.m2v.wv.vocab))
    # list of tuples 2
    most_ss = self.indexer.most_similar(spref, len(self.m2v.wv.vocab))
    # concat both lists and add into a dict
    d = defaultdict(int)  # requires: from collections import defaultdict
    for l, v in (most_ss + most_su):
        d[l] += v
    # convert the dict into a list, and then sort it
    _list = list(d.items())
    _list.sort(key=lambda x: x[1], reverse=True)
    return [x[0] for x in _list[:self.N]]
How do I make this code faster? I've tried using threads, but I'm not sure if they will make it faster. Getting the lists is not the problem here, but the concatenation and sorting are.
Thanks! English is not my native language, so sorry for any misspelling.
What do you mean by "too long"? How large are the two lists? Is there a chance your model, and interim results, are larger than RAM and thus forcing virtual-memory paging (which would create frustrating slowness)?
If you are in fact getting the cosine-similarity with all vectors in the model, the annoy-indexer isn't helping any. (Its purpose is to get a small subset of nearest-neighbors much faster, at the expense of perfect accuracy. But if you're calculating the similarity to every candidate, there's no speedup or advantage to using ANNOY.)
Further, if you're going to combine all of the distances from two such calculations, there's no need for the sorting that most_similar() usually does - it just makes combining the values more complex later. For the gensim vector-models, you can supply a False-ish topn value to just get the unsorted distances to all model vectors, in order. Then you'd have two large arrays of the distances, in the model's same native order, which are easy to add together elementwise. For example:
udists = self.m2v.most_similar(positive=[upref], topn=False)
sdists = self.m2v.most_similar(positive=[spref], topn=False)
combined_dists = udists + sdists
The combined_dists aren't labeled, but will be in the same order as self.m2v.index2entity. You could then sort them, in a manner similar to what the most_similar() method itself does, to find the ranked closest. See for example the gensim source code for that part of most_similar():
https://github.com/RaRe-Technologies/gensim/blob/9819ce828b9ed7952f5d96cbb12fd06bbf5de3a3/gensim/models/keyedvectors.py#L557
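As a rough sketch of that combine-and-rank step, continuing from the snippet above and reusing its attribute names (index2entity is the gensim 3.x attribute mentioned earlier):
import numpy as np

combined_dists = udists + sdists                       # elementwise sum of the two similarity arrays
top_idx = np.argsort(combined_dists)[::-1][:self.N]    # indices of the N largest combined similarities
top_names = [self.m2v.index2entity[i] for i in top_idx]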
Finally, you might not need to be doing this calculation yourself at all. You can provide more-than-one vector to most_similar() as the positive target, and then it will return the vectors closest to the average of both vectors. For example:
sims = self.m2v.most_similar(positive=[upref, spref], topn=len(self.m2v))
This won't be the same value/ranking as your other sum, but may behave very similarly. (If you wanted less-than-all of the similarities, then it might make sense to use the ANNOY indexer this way, as well.)
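As a hedged sketch of that last suggestion, gensim ships an Annoy integration; the module path has moved between versions, so treat this as illustrative rather than exact:
from gensim.similarities.annoy import AnnoyIndexer   # gensim 4.x; in 3.x it lives in gensim.similarities.index

annoy_index = AnnoyIndexer(self.m2v, num_trees=100)   # build an approximate index over the model's vectors
sims = self.m2v.most_similar(positive=[upref, spref], topn=self.N, indexer=annoy_index)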
Is it possible to somehow "hash" a given String with length n to a hash value of an arbitrary length m?
I want to achieve something like follows:
let s1 = "<UNIQUE_USER_IDENTIFIER_1>"
let s2 = "<UNIQUE_USER_IDENTIFIER_2>"
let x1 = s1.hashValue(length: 4)
let x2 = s2.hashValue(length: 4)
I want to assign each given user a (e.g. four-digit) number that is based on their unique UID. Is that possible?
First, I want to be clear that you mean "hash" and don't mean "(lossless) compress." You should expect some collisions where x1 and x2 are the same value for different s1 and s2. If you really mean a mapping so that there are no collisions, then we have to know a lot more about the problem. It is impossible to achieve that in the general case (see the Pigeonhole principle). But it can be achieved in some special cases where there is sufficient redundancy in the input. Or it can be done by maintaining a table (i.e. a database or the like). The rest of this answer is about hashing.
If your UID is a UUID created on iOS (or any v4 UUID), then its bits are already quite high quality, and the last four digits should be fine without doing any hashing at all. There are a couple of bytes in the middle that you should avoid, but the whole end section is random and so an ideal hash.
If your UUID is not random, you can try using the default hashes and pulling the required number of bits out of them, but non-cryptographic hashes don't always have good independence between their bits, so this may collide more than you like.
In that case, use a cryptographic hash larger than the size you need and truncate it (or take the least-significant bits; either is fine). This is commonly done in cryptography. For example, SHA-512/256 is a commonly used hash that computes a 512-bit hash and extracts 256 bits from it. Cryptographic hashes require high independence of all their bits, so any subset of bits will also be collision resistant.
BTW, if you mean "4 decimal digits," then you should expect a collision about 1 time out of 100. If you mean 16 bits (4 hex digits), you should expect a collision about one time in 300. These are your best-case scenarios and mean your hash is working well. See Birthday Attack for a table of expectations and some helpful approximations.
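To make the truncation idea concrete, here is a minimal sketch (shown in Python for brevity; the idea is language-agnostic, and the same approach works with any SHA-256 implementation, e.g. Apple's CryptoKit in Swift):
import hashlib

def short_hash(uid: str, length: int = 4) -> str:
    digest = hashlib.sha256(uid.encode("utf-8")).hexdigest()
    return digest[-length:]   # any fixed subset of a cryptographic hash's digits is fine

print(short_hash("<UNIQUE_USER_IDENTIFIER_1>"))   # 4 hex digits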
Based on only the information you provided:
extension String {
    func hashValue(length: Int) -> Int? {
        return Int(String(abs(hash)).prefix(length))
    }
}
Usage:
"foo".hashValue(length: 4) // 5192
This will give you a consistent positive integer result based on the string input. Obviously it is not very useful for uuid purposes but useful for other use-cases nonetheless.
I'm using the Levenshtein distance algorithm to filter through some text in order to determine the best matching result for the purpose of text field auto-completion (and top 5 best results).
Currently, I have an array of strings, and apply the algorithm to each one in an attempt to determine how close of a match it is to the text which was typed by the user. The problem is that I'm not too sure how to interpret the values outputted by the algorithm to effectively rank the results as expected.
For example: (Text typed = "nvmb")
Result: "game" ; levenshtein distance = 3 (best match)
Result: "number the stars" ; levenshtein distance = 13 (second best match)
This technically makes sense; the second result needs many more 'edits' because of its length. The problem is that the second result is logically and visually a much closer match than the first one. It's almost as if I should ignore any characters beyond the length of the typed text.
Any ideas on how I could achieve this?
Levenshtein distance itself is good for correcting a query, not for auto-completion.
I can propose an alternative solution:
First, store your strings in a prefix tree (trie) instead of an array, so you have no need to analyze all of them.
Second, given the user input, enumerate strings within a fixed distance of it and suggest completions for each.
Your example: Text typed = "nvmb"
Distance is 0, no completions
Enumerate strings with distance 1
Only "numb" will have some completions
Another example: Text typed = "gamb"
For distance 0 you have only one completion, "gambling"; make it the first suggestion, and continue to get 4 more.
For distance 1 you will get "game" and some completions for it
Of course, this approach sometimes gives more than 5 results, but you can order them by another criterion that does not depend on the current query.
I think it is more efficient because you can typically limit the distance to at most two, i.e. check on the order of 1000*n prefixes, where n is the length of the input, which is usually far fewer than the number of stored strings.
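A minimal Python sketch of this approach, using a plain dict-based trie and, for brevity, only substitutions at edit distance 1 (a full implementation would also enumerate insertions and deletions; all helper names here are illustrative):
def build_trie(words):
    # each node is a dict of child characters; "$" marks a complete word
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node["$"] = True
    return root

def walk(root, prefix):
    # follow `prefix` down the trie; return None if it is not present
    node = root
    for ch in prefix:
        node = node.get(ch)
        if node is None:
            return None
    return node

def completions(node, prefix, limit):
    # collect up to `limit` stored words that extend `prefix` from this node
    out, stack = [], [(node, prefix)]
    while stack and len(out) < limit:
        cur, text = stack.pop()
        if "$" in cur:
            out.append(text)
        for ch, child in cur.items():
            if ch != "$":
                stack.append((child, text + ch))
    return out

def suggest(root, query, n=5):
    # distance 0 first, then all prefixes at edit distance 1 (substitutions only)
    results, seen = [], set()
    candidates = [query]
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    for i in range(len(query)):
        for ch in alphabet:
            if ch != query[i]:
                candidates.append(query[:i] + ch + query[i + 1:])
    for cand in candidates:
        node = walk(root, cand)
        if node is None:
            continue
        for word in completions(node, cand, n - len(results)):
            if word not in seen:
                seen.add(word)
                results.append(word)
        if len(results) >= n:
            break
    return results

trie = build_trie(["game", "gambling", "number the stars", "numb"])
print(suggest(trie, "nvmb"))   # ['numb', 'number the stars']
print(suggest(trie, "gamb"))   # ['gambling', 'game']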
The Levenshtein distance corresponds to the number of single-character insertions, deletions and substitutions in an optimal global pairwise alignment of two sequences if the gap and mismatch costs are all 1.
The Needleman-Wunsch DP algorithm will find such an alignment, in addition to its score (it's essentially the same DP algorithm as the one used to calculate the Levenshtein distance, but with the option to weight gaps, and mismatches between any given pair of characters, arbitrarily). But there are more general models of alignment that allow reduced penalties for gaps at the start or the end (and reduced penalties for contiguous blocks of gaps, which may also be useful here, although it doesn't directly answer the question). At one extreme, you have local alignment, which is where you pay no penalty at all for gaps at the ends -- this is computed by the Smith-Waterman DP algorithm.
I think what you want here is in-between: You want to penalise gaps at the start of both the query and test strings, and gaps in the test string at the end, but not gaps in the query string at the end. That way, trailing mismatches cost nothing, and the costs will look like:
Query: nvmb
Costs: 0100000000000000 = 1 in total
Against: number the stars
Query: nvmb
Costs: 1101 = 3 in total
Against: game
Query: number the stars
Costs: 0100111111111111 = 13 in total
Against: nvmb
Query: ber star
Costs: 1110001111100000 = 8 in total
Against: number the stars
Query: some numbor
Costs: 111110000100000000000 = 6 in total
Against: number the stars
(In fact you might want to give trailing mismatches a small nonzero penalty, so that an exact match is always preferred to a prefix-only match.)
The Algorithm
Suppose the query string A has length n, and the string B that you are testing against has length m. Let d[i][j] be the DP table value at (i, j) -- that is, the cost of an optimal alignment of the length-i prefix of A with the length-j prefix of B. If you go with a zero penalty for trailing mismatches, you only need to modify the NW algorithm in a very simple way: instead of calculating and returning the DP table value d[n][m], you just need to calculate the table as before, and find the minimum of any d[n][j], for 0 <= j <= m. This corresponds to the best match of the query string against any prefix of the test string.
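A minimal Python sketch of that modification, assuming unit costs for gaps and mismatches (the Levenshtein weights) and the zero-penalty-for-trailing-mismatches variant, i.e. taking the minimum over the last row of the DP table as described above:
def prefix_match_cost(query, text):
    # d[i][j] = cost of aligning query[:i] against text[:j]
    n, m = len(query), len(text)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i                      # leading gaps in the test string cost 1 each
    for j in range(1, m + 1):
        d[0][j] = j                      # leading gaps in the query cost 1 each
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if query[i - 1] == text[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # gap in the test string
                          d[i][j - 1] + 1,        # gap in the query
                          d[i - 1][j - 1] + sub)  # match / substitution
    # free trailing gaps in the query: best alignment of the whole query
    # against ANY prefix of the test string, i.e. the minimum of d[n][j]
    return min(d[n])

print(prefix_match_cost("nvmb", "number the stars"))   # 1
print(prefix_match_cost("nvmb", "game"))               # 3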
I have the following code:
let rand = System.Random()
let gold = [ for i in years do yield rand.NextDouble()]
However I cannot collapse it into one line as
let gold = [ for i in years do yield System.Random.NextDouble()]
Why?
Your two code examples are not equivalent. The first one creates an object, and then repeatedly calls NextDouble() on that object. The second one appears to call a static method on the Random class, but I'd be surprised if it even compiles, since NextDouble() is not actually declared as static.
You can combine the creation of the Random instance and its usage in a couple of ways, if desired:
let gold =
    let rand = Random()
    [for i in 1..10 do yield rand.NextDouble()]
or
let gold = let rand = Random() in [for i in 1..10 do yield rand.NextDouble()]
Most random numbers generated by computers, such as in the case of your code, are not random in the true sense of the word. They are generated by an algorithm and, given the algorithm and seed (the starting point for the algorithm), are deterministic.
Essentially, when you want a series of random numbers, you select a seed and an algorithm, and that algorithm starts generating random numbers for you, using the seed as the starting point and iterating the algorithm from there.
In the old days, people would produce books of "random numbers". These books used a seed and algorithm to produce the random series of numbers ahead of time. If you wanted a random number, then you would select one from the book.
Computers work similarly. When you call
let rand = System.Random()
You are initializing the random number generator. It is like you are creating a book full of random numbers. Then to iteratively draw random numbers from the series, you do
rand.NextDouble()
That is like picking the first number from the series (book). Call it again and you pick the second number from the series, etc.
What is the point of F#/.NET having you initialize the random number generator? What if you wanted repeatable results, where the random series would contain the same numbers every time you ran the code? Doing it this way allows you to set the seed, so you are guaranteed to have the same "book of random numbers" each time:
let rand = System.Random(1)
Or, what if you wanted two different series of random numbers?
let rand1 = System.Random(1)
let rand2 = System.Random(2)