Three-way interaction in mgcv

I am interested in implementing a three-way interaction in mgcv but while there has been some discussion both here and on Cross Validated, I have had trouble finding an answer to how exactly one should code a three-way interaction between two continuous variables and one categorical variable. In my study, I have only four variables (socioeconomic class (Socio), sex, year of death (YOD), and age) for each individual and I am curious how these variables explain the likelihood of someone being buried with burial goods (N=c.12,000).
I have read Pedersen et al. 2019 and have elected to not include global smooths in my model. However, I am not certain if this means I should also not include the lower order interaction terms in my three-way interaction model. For example, should my code be:
mgcv::gam(Goods ~ Socio + te(Age, YOD, by = Socio, k = 5),
          family = binomial(link = 'logit'), data = mydata, method = 'ML')
or should I still include the lower-order terms alongside the three-way interaction:
mgcv::gam(Goods ~ Socio + s(Age, by = Socio, k = 5) + s(YOD, by = Socio, k = 5) +
          te(Age, YOD, k = 5) + ti(Age, YOD, by = Socio, k = 5),
          family = binomial(link = 'logit'), data = mydata, method = 'ML')
or is there a different means of coding this?

Unless you want to perform a test on the interaction, you are better off not decomposing the te() into main and interaction effects. This is because the models
y ~ te(x1, x2) # main + interaction in one smooth model
y ~ s(x1) + s(x2) + ti(x1, x2) # decomposed model
aren't exactly equivalent:
you have to be very picky about setting k on all the terms so that the models use the same number of basis functions, and
the decomposed model uses more smoothness parameters than the te() version, so it is a slightly more complex model even if the bases end up roughly comparable in size.
Note you are including a global smooth in the second model: the term te(Age,YOD,k=5) will be a global smooth of Age and YOD over the whole data set.
The decomposed version of your first model would be:
Goods ~ Socio +
s(Age, by = Socio) +
s(YOD, by = Socio) +
ti(Age, YOD, by = Socio)
Setting things up to test whether you need the factor by terms would require more work, but I think you'd be better off doing that post hoc on the te() model, where you can compare the fitted surfaces by differencing them.

gretl - dummy interactions

There does not seem to be an "easy" way (as there is in R or Python) to create interaction terms between dummy variables in gretl. Do we really need to code them manually, which would be difficult for factors with many levels? Here is a minimal example of manual coding:
open credscore.gdt
SelfemplOwnRent=OwnRent*Selfempl
# model 1
ols Acc 0 OwnRent Selfempl SelfemplOwnRent
Now my manual interaction term will not work for factors with many levels and in fact does not even do the job for binary variables.
Thanks,
ML
One way of doing this is to use lists. Use the dummify command to generate dummies for each level and the ^ operator to create the interactions. Example:
open griliches.gdt
discrete med
list X = dummify(med)
list D = dummify(mrt)
list INT = X^D
ols lw 0 X D INT
The discrete command turns your variable into a discrete variable and allows you to use dummify (this step is not necessary if your variable is already discrete). All the interaction terms are then stored in the list INT and you can easily access them in the following ols command.
@Markus Loecher, on your second question:
You can always use the rename command to rename a series, so you would have to loop over all elements of the list INT to do so. However, if you want shorter series names, I would rather suggest renaming both input series (mrt and med in the example above) before computing the interaction terms.

How to find a function that fits a given data set?

The search algorithm is a breadth-first search. I'm not sure how to store terms from an equation in an open list. The function f(x) has the form ax^e1 + bx^e2 + cx^e3 + k, where a, b, c are coefficients and k is a constant. All exponents, coefficients, and constants are integers between 0 and 5.
The initial state of the problem-solving process should be any term from ax^e1, bx^e2, cx^e3, k.
The algorithm gradually expands the number of terms in each level of the list.
Not sure how to add the terms to an equation from an open queue. That is the question.
The general problem you are dealing with belongs to regression analysis, and several techniques are available for finding a function that fits a given data set, including the popular least-squares methods for finding the line of best fit (a brief starting point is the related page on Wikipedia, but if you want to deepen this topic you should look at the research papers out there).
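For instance, if you just want a least-squares fit rather than a search, a few lines of Python will do it. This is only a sketch: the data points and the choice of a cubic are assumptions made for illustration.
import numpy as np

# Hypothetical data generated from y = 3x^2 + 1
x = np.array([0, 1, 2, 3])
y = np.array([1, 4, 13, 28])

coeffs = np.polyfit(x, y, deg=3)   # least-squares fit of a degree-3 polynomial
print(np.round(coeffs, 3))         # approximately [0, 3, 0, 1], i.e. y = 3x^2 + 1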
If you want to stick with the breadth-first search algorithm, although this kind of approach is not common for such a problem, you first of all need to define all the elements of a search problem, namely (see Chapter 3 of Russell and Norvig, Artificial Intelligence: A Modern Approach, for more information):
Initial state: Some initial values for the different terms.
Actions: in your case it should be a change in the different terms. Note that you should discretize the changes in the values.
Transition function: function that determines the new states given a state and an action.
Goal test: a check to recognize whether a state is a goal state or not, and so to terminate the search. There are different ways to define this test in a regression problem; one way is to set a threshold on the sum of squared errors.
Step cost: the cost of an action. In such an abstract problem, you can probably just use a unit cost per action, i.e. the unweighted distance from the initial state on the search graph.
Note that you should carefully think about these elements, as, for example, they determine how efficient your search would be or whether you will have cycles in the search graph.
After you have defined all of the elements of the search problem, you basically have to implement (a sketch follows this list):
a Node, which contains information about the parent, the state, and the current cost;
a function to expand a given node, returning its successor nodes (according to the transition function, the actions, and the step cost);
the goal test;
the actual search algorithm. At the beginning the queue holds only the node containing the initial state; afterwards it is updated with the successor nodes.
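Here is a minimal sketch in Python of how those pieces could fit together for your specific f(x). The data set, the error threshold, and the choice of encoding states as partially assigned parameter tuples are all assumptions made for illustration:
from collections import deque

# Hypothetical data we want f(x) = a*x^e1 + b*x^e2 + c*x^e3 + k to fit (generated from 3x^2 + 1)
data = [(0, 1), (1, 4), (2, 13), (3, 28)]

def sse(params, data):
    """Sum of squared errors of the candidate equation on the data (used as the goal test)."""
    a, e1, b, e2, c, e3, k = params
    return sum((a * x**e1 + b * x**e2 + c * x**e3 + k - y) ** 2 for x, y in data)

def bfs_fit(data, threshold=1e-9, values=range(6)):
    """Breadth-first search over the 7 integer parameters (a, e1, b, e2, c, e3, k).

    A state is a tuple of the parameters assigned so far; the queue is the 'open list',
    and each level of the search fixes one more parameter.
    """
    frontier = deque([()])                 # initial state: nothing assigned yet
    while frontier:
        state = frontier.popleft()
        if len(state) == 7:                # a complete equation: apply the goal test
            if sse(state, data) <= threshold:
                return state
            continue
        for v in values:                   # actions: choose a value for the next parameter
            frontier.append(state + (v,))  # transition function: extend the state
    return None                            # no equation fits within the threshold

print(bfs_fit(data))   # finds a parameter tuple reproducing 3x^2 + 1, e.g. (0, 0, 0, 0, 3, 2, 1)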

SARSA Implementation

I am learning about SARSA algorithm implementation and had a question. I understand that the general "learning" step takes the form of:
Robot (r) is in state s. There are four actions available:
North (n), East (e), West (w) and South (s)
such that the list of Actions,
a = {n,w,e,s}
The robot randomly picks an action, and updates as follows:
Q(a,s) = Q(a,s) + L[r + D·Q(a',s') - Q(a,s)]
where L is the learning rate, r is the reward associated with (a,s), Q(a',s') is the expected reward from taking action a' in the new state s', and D is the discount factor.
Firstly, I don't understand the role of the term -Q(a,s): why are we subtracting the current Q-value?
Secondly, when picking actions a and a', why do these have to be random? I know that in some implementations of SARSA all possible Q(a',s') are taken into account and the highest value is picked (I believe this is epsilon-greedy?). Why not do this also to pick which Q(a,s) value to update? Or why not update all Q(a,s) for the current s?
Finally, why is SARSA limited to a one-step lookahead? Why not, say, also look into a hypothetical Q(a'',s'')?
I guess overall my questions boil down to what makes SARSA better than another breadth-first or depth-first search algorithm?
Why do we subtract Q(a,s)? r + D·Q(a',s') is the reward that we got on this run through from getting to state s by taking action a. In theory, this is the value that Q(a,s) should be set to. However, we won't always take the same action after getting to state s from action a, and the rewards associated with going to future states will change in the future. So we can't just set Q(a,s) equal to r + D·Q(a',s'). Instead, we just want to push it in the right direction so that it will eventually converge on the right value. So we look at the error in prediction, which requires subtracting Q(a,s) from r + D·Q(a',s'). This is the amount that we would need to change Q(a,s) by in order to make it perfectly match the reward that we just observed. Since we don't want to do that all at once (we don't know if this is always going to be the best option), we multiply this error term by the learning rate, L, and add this value to Q(a,s) for a more gradual convergence on the correct value.
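In code, the update is just a couple of lines. Here is a sketch with the tabular Q stored as a Python dictionary; the function name and default values are illustrative, not from any particular library:
# Q maps (state, action) pairs to values; L is the learning rate, D the discount factor.
def sarsa_update(Q, s, a, r, s_next, a_next, L=0.1, D=0.9):
    td_error = r + D * Q[(s_next, a_next)] - Q[(s, a)]  # the prediction error described above
    Q[(s, a)] += L * td_error                           # nudge Q(a,s) toward the observed return
    return Q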
Why do we pick actions randomly? The reason to not always pick the next state or action in a deterministic way is basically that our guess about which state is best might be wrong. When we first start running SARSA, we have a table full of 0s. We put non-zero values into the table by exploring those areas of state space and finding that there are rewards associated with them. As a result, something not terrible that we have explored will look like a better option than something that we haven't explored. Maybe it is. But maybe the thing that we haven't explored yet is actually way better than anything we've already seen. This is called the exploration vs. exploitation problem - if we just keep doing things that we know work, we may never find the best solution. Choosing next steps randomly ensures that we see more of our options.
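The epsilon-greedy rule mentioned in the question mixes the two: usually exploit the best-known action, but explore at random a fraction of the time. A minimal sketch (the names and the epsilon value are illustrative):
import random

def choose_action(Q, s, actions, eps=0.1):
    if random.random() < eps:
        return random.choice(actions)              # explore: pick a random action
    return max(actions, key=lambda a: Q[(s, a)])   # exploit: pick the best-known action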
Why can't we just take all possible actions from a given state? This will force us to basically look at the entire learning table on every iteration. If we're using something like SARSA to solve the problem, the table is probably too big to do this for in a reasonable amount of time.
Why can SARSA only do one-step look-ahead? Good question. The idea behind SARSA is that it's propagating expected rewards backwards through the table. The discount factor, D, ensures that in the final solution you'll have a trail of gradually increasing expected rewards leading to the best reward. If you filled in the table at random, this wouldn't always be true. This doesn't necessarily break the algorithm, but I suspect it leads to inefficiencies.
Why is SARSA better than search? Again, this comes down to an efficiency thing. The fundamental reason that anyone uses learning algorithms rather than search algorithms is that search algorithms are too slow once you have too many options for states and actions. In order to know the best action to take from any other state action pair (which is what SARSA calculates), you would need to do a search of the entire graph from every node. This would take O(s*(s+a)) time. If you're trying to solve real-world problems, that's generally too long.

Normalize company names using a long set of rules

We have a large table (>30M rows) containing company names and other characteristics.
Data:
Company_id   Type   Name                 Adress             (more...)
497651684    8      Big mall Toys'rUs    BigMall adress
468468486    1      McDonnnals           WhateverStreet
161684314    8      Toys R Us            Another street
546846846    1      BgKing               BigMall2 adress
484984988    5      IKEA store103        Other Mall
489616848    5      Mss Duty             Addrs
484984984    5      Pull&Bear            Adrss
468484867    5      Zara store           Adress2
(...)
From that table, we have identified roughly 300 company groups whose names could be normalized easily with something along the lines of:
if type is (8,10,85,2)
and
(
contains name ("toys","us")
or
stringDistance name("toys R us") < (X)
)
new name is "Toys 'R us"
if type is (1,90,7)
and
(contains name("donalds")
or
stringDistance name("mcdonalds") < (X)
)
new name is "Mc donalds"
(...)
I'm honestly not sure what the best approach for this would be. We previously used an ad-hoc approach for a much smaller set with simpler logic, as a fast solution. But this time I would love to know what the ideal way would be.
While string edit distance, e.g. stringDistance name("toys R us") < (X), is a good approach, I would also recommend trying clustering here, especially hierarchical clustering.
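For concreteness, rules like the ones in the question could be expressed directly in Python. This is only a sketch: difflib's SequenceMatcher from the standard library stands in for stringDistance, and the 0.35 threshold is an illustrative guess rather than a recommended value.
from difflib import SequenceMatcher

def string_distance(a, b):
    # Rough edit-style distance in [0, 1]; 0 means identical
    return 1 - SequenceMatcher(None, a.lower(), b.lower()).ratio()

def normalize(row):
    name = row["Name"].lower()
    if row["Type"] in (8, 10, 85, 2) and (("toys" in name and "us" in name)
                                          or string_distance(name, "toys r us") < 0.35):
        return "Toys 'R us"
    if row["Type"] in (1, 90, 7) and ("donalds" in name
                                      or string_distance(name, "mcdonalds") < 0.35):
        return "Mc donalds"
    return row["Name"]   # no rule matched: keep the original name

print(normalize({"Type": 8, "Name": "Toys R Us"}))    # -> Toys 'R us
print(normalize({"Type": 1, "Name": "McDonnnals"}))   # -> Mc donalds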
All the names falling into the same cluster should get the same standard company name. For this to work you will have to cut the dendrogram (http://en.wikipedia.org/wiki/Dendrogram) of the hierarchy pretty close to the leaves. You will have to try different features (the ones used in calculating the distance or similarity) for your clustering to arrive at a suitable set. An example would be representing each company name as a vector of characters and then using cosine similarity to measure distances. By the way, cosine similarity works great for text.
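A minimal sketch of that idea in Python, where the feature choice (character trigram counts), the average linkage, and the cut threshold are all assumptions you would need to tune on your data:
from sklearn.feature_extraction.text import CountVectorizer
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

names = ["Toys'rUs", "Toys R Us", "McDonnnals", "Mc donalds", "Pull&Bear", "Zara store"]

# Represent each name as character trigram counts
X = CountVectorizer(analyzer="char_wb", ngram_range=(3, 3)).fit_transform(names).toarray()

# Hierarchical (average-linkage) clustering on cosine distances
Z = linkage(pdist(X, metric="cosine"), method="average")

# Cut the dendrogram close to the leaves; names sharing a label get the same normalized name
labels = fcluster(Z, t=0.5, criterion="distance")
for label, name in sorted(zip(labels, names)):
    print(label, name)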

Naive Bayes training set optimization

I am working on a naive Bayes classifier that takes a bunch of user profile data such as:
Name
City
State
School
Email Address
URLS { ... }
The last bit is a bunch of URLs that are search results gathered by a Google search for the user's name. The objective is to decide whether a search result is accurate (i.e. it is about the person) or inaccurate. To do this, each piece of the profile data is searched for within each link in the URL array, and a binary value is assigned per attribute if that piece of profile data (e.g. City) is matched on the page. The results are then represented as a vector of binary values (e.g. 1 0 0 0 1 means Name and Email Address were matched on the URL).
My questions revolve around creating the optimal training set. If a person's profile has incomplete information (such as a missing email address), is that a good profile to use in my training set? Should I only be training on profiles with full information? Would it make sense to make different training sets (one for each combination of complete profile attributes) and then, when I am given a user's URL to test with, determine which training set to use based on how much profile information is on record for the test person? How should I go about this?
In general, there is no "should". Whichever way you create a model, the only thing which matters is its performance, no matter how you created it.
However, it is highly unlikely you'd be able to create a proper model with a hand-picked training set. The simple idea is that you should train your model on data which looks exactly like live data. Will live data have missing values, incomplete profiles, etc.? If so, your model needs to know what to do in such situations, and therefore such profiles should be in the training set.
Yes, certainly, you can make a model composed of several sub-models; however, you might run into problems with not having enough training data, and with overfitting. You'd have to create multiple good models for it to work, which is harder. I suppose it would be better to leave this reasoning to the model itself rather than trying to hand-hold it in the right direction; this is what machine learning is for - saving you the trouble... But there is really no way to say before trying it on your data set. Again, whatever works in your particular case is right.
Because you're using naive Bayes as your model (and only because of that), you can benefit from the independence assumption to use every piece of data you have available and consider only the features present in the new sample.
You have features f_1 ... f_n, some of which may or may not be present in any given entry. The posterior probability p(relevant | f_1 ... f_n) decomposes as:
p(relevant | f_1 ... f_n) ∝ p(relevant) * p(f_1 | relevant) * p(f_2 | relevant) * ... * p(f_n | relevant)
p(irrelevant | f_1 ... f_n) is similar. If some particular f_i isn't present, just drop those terms from the two posteriors; given that they're defined over the same feature space, the probabilities are comparable and can be normalised in the standard way. All you then need is to estimate the terms p(f_i | relevant): this is simply the fraction of the relevant links where the i-th feature is 1 (possibly smoothed). To estimate this parameter, simply use the set of relevant links where the i-th feature is defined.
This is only going to work if you implement it yourself, as I don't think you can do this with a standard package, but given how easy it is to implement I wouldn't be concerned.
Edit: an example
Imagine you have the following features and data (they're binary, since you say that's what you have, but the extension to categorical or continuous is not difficult, I hope):
D = [ {email: 1, city: 1, name: 1, RELEVANT: 1},
      {city: 1, name: 1, RELEVANT: 0},
      {city: 0, email: 0, RELEVANT: 0},
      {name: 1, city: 0, email: 1, RELEVANT: 1} ]
where each element of the list is an instance, and the target variable for classification is the special RELEVANT field (note that some of these instances have some variables missing).
You then want to classify the following instance, missing the RELEVANT field since that's what you're hoping to predict:
t = {email: 0, name: 1}
The posterior probability
p(RELEVANT=1 | t) = [p(RELEVANT=1) * p(email=0|RELEVANT=1) * p(name=1|RELEVANT=1)] / evidence(t)
while
p(RELEVANT=0 | t) = [p(RELEVANT=0) * p(email=0|RELEVANT=0) * p(name=1|RELEVANT=0)] / evidence(t)
where evidence(t) is just a normaliser obtained by summing the two numerators above.
To get each of the parameters of the form p(email=0|RELEVANT=1), look at the fraction of training instances where RELEVANT=1 which have email=0:
p(email=0|RELEVANT=1) = count(email=0,RELEVANT=1) / [count(email=0,RELEVANT=1) + count(email=1,RELEVANT=1)].
Notice that this term simply ignores instances for which email is not defined.
In this instance, the posterior probability of relevance goes to zero because count(email=0,RELEVANT=1) is zero. So I would suggest using a smoothed estimator where you add one to every count, so that:
p(email=0|RELEVANT=1) = [count(email=0,RELEVANT=1)+1] / [count(email=0,RELEVANT=1) + count(email=1,RELEVANT=1) + 2].
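Putting the whole scheme together, a self-contained sketch in Python might look like this. The helper names are mine, not from any package; the data and test instance are the ones from the example above:
from collections import Counter

D = [ {"email": 1, "city": 1, "name": 1, "RELEVANT": 1},
      {"city": 1, "name": 1, "RELEVANT": 0},
      {"city": 0, "email": 0, "RELEVANT": 0},
      {"name": 1, "city": 0, "email": 1, "RELEVANT": 1} ]

def cond_prob(feature, value, relevant, data):
    """Smoothed p(feature=value | RELEVANT=relevant), using only rows where the feature is defined."""
    rows = [d for d in data if d["RELEVANT"] == relevant and feature in d]
    matches = sum(1 for d in rows if d[feature] == value)
    return (matches + 1) / (len(rows) + 2)      # add-one smoothing over the two binary values

def posterior_relevant(t, data):
    """p(RELEVANT=1 | t), considering only the features present in the test instance t."""
    priors = Counter(d["RELEVANT"] for d in data)
    scores = {}
    for r in (0, 1):
        score = priors[r] / len(data)           # p(RELEVANT=r)
        for feature, value in t.items():
            score *= cond_prob(feature, value, r, data)
        scores[r] = score
    return scores[1] / (scores[0] + scores[1])  # normalise by the evidence

t = {"email": 0, "name": 1}
print(posterior_relevant(t, D))                 # roughly 0.3 with the smoothed estimates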
