Multicollinearity test in multinomial regression with categorical independent variables in R

I am running a multinomial regression with categorical independent variables. To check for multicollinearity, I used the vif() function from the car package, but it shows the following warning:
In vif.default(model1) : No intercept: vifs may not be sensible.
Also, the result seems quite implausible.
Can you please recommend functions/packages in R that can calculate VIF in multinomial regression with categorical independent variables?
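One commonly suggested workaround is to note that collinearity is a property of the predictors alone, not of the multinomial outcome, so the (generalised) VIFs can be computed from an ordinary linear model with the same right-hand side. A minimal R sketch, assuming a data frame dat with outcome y and categorical predictors x1, x2, x3 (all hypothetical names):

    library(car)
    # Fit an auxiliary linear model with the same predictors; any numeric
    # placeholder response works because VIF only looks at the design matrix.
    aux <- lm(as.numeric(y) ~ x1 + x2 + x3, data = dat)
    # For factor predictors, car::vif() reports generalised VIFs (GVIF) and
    # GVIF^(1/(2*Df)), which are the values to inspect here.
    vif(aux)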

Related

Use categorical data as feature/target without encoding it

I recently found a model that classifies the Iris flower based on the size of its leaves. There are 3 types of flowers as the target (dependent variable). As far as I know, categorical data should be encoded so that it can be used in machine learning. However, in this model the data is used directly, without any encoding step.
Can anyone explain when encoding should be used? Thank you in advance!
Relevant question - encoding of continuous feature variables.
The Iris data were originally published by Fisher alongside his linear discriminant classifier.
Generally, a distinction is made between:
Real-valued classifiers
Discrete-feature classifiers
Linear discriminant analysis and quadratic discriminant analysis are real-valued classifiers. Trying to add discrete variables as extra inputs does not work. Special procedures have been developed for working with indicator variables (the term used in statistics) in discriminant analysis. The k-nearest-neighbour classifier also really only works well with real-valued feature variables.
The naive Bayes classifier is most commonly used for classification problems with discrete features. When you do not want to assume conditional independence between the feature variables, a multinomial classifier can be applied to discrete features.
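As a minimal sketch of the discrete-feature case, a naive Bayes classifier can be fit directly on factor variables in R, for example with the e1071 package (the data frame below is made up purely for illustration):

    library(e1071)
    # Hypothetical data: two discrete features and a two-level class label.
    d <- data.frame(
      size   = factor(sample(c("small", "medium", "big"), 100, replace = TRUE)),
      colour = factor(sample(c("red", "blue"), 100, replace = TRUE)),
      class  = factor(sample(c("A", "B"), 100, replace = TRUE))
    )
    fit <- naiveBayes(class ~ size + colour, data = d)  # per-level conditional tables
    predict(fit, d[1:5, ])                              # predicted class labels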
Neural networks and support vector machines can combine real-valued and discrete features. My advice is to use one separate input node for each discrete outcome - don't use a single input node fed with values like (0: small, 1: minor, 2: medium, 3: larger, 4: big). One-input-node-per-outcome encoding will improve your training result and yield better test-set performance.
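A minimal sketch of this one-input-node-per-outcome (one-hot / dummy) encoding in R, using a hypothetical factor size:

    size <- factor(c("small", "medium", "big", "medium", "big"))
    # One 0/1 column per level; the "- 1" keeps all levels instead of dropping
    # a reference level, matching the one-node-per-outcome advice above.
    model.matrix(~ size - 1)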
The random forest classifier also combines real-valued and discrete features seamlessly.
A final piece of advice: train and compare on a test set at least 4 different types of classifiers, as there is no such thing as a universally best type of classifier.

Assumptions of Naive Bayes and Logistic Regression

I am trying to understand the difference in the assumptions required for Naive Bayes and Logistic Regression.
As far as I know, both Naive Bayes and Logistic Regression require the features to be independent of each other, i.e. the predictors should not have any multicollinearity.
Only Logistic Regression additionally needs to satisfy linearity of the independent variables and the log-odds.
Correct me if I am wrong, and are there any other assumptions/differences between Naive Bayes and Logistic Regression?
You are right, durga. The two also have similar performance.
One difference is that (Gaussian) NB assumes the features are normally distributed, whereas logistic regression does not. As for speed, NB is much faster.
Logistic regression, according to this source:
1) Requires observations to be independent of each other.  In other words, the observations should not come from repeated measurements or matched data.
2) Requires the dependent variable to be binary (ordinal logistic regression requires the dependent variable to be ordinal).
3) Requires little or no multicollinearity among the independent variables.  This means that the independent variables should not be too highly correlated with each other.
4) Assumes linearity of independent variables and log odds.
5) Typically requires a large sample size.  A general guideline is that you need a minimum of 10 cases with the least frequent outcome for each independent variable in your model.
tl;dr:
Naive Bayes requires conditional independence of the variables. The regression family needs the features to not be highly correlated in order to have an interpretable/well-fit model.
Naive Bayes requires the features to meet the "conditional independence" requirement, which means each feature is conditionally independent of every other feature given the class label, i.e. P(x_i | x_j, y) = P(x_i | y) for i ≠ j.
This is very different from the "regression family" requirements. What they need is that the variables are not highly correlated. Even if the features are correlated, the regression model might only become overfit or harder to interpret, so if you use proper regularization you would still get a good prediction.
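A minimal sketch of that last point, fitting a regularised (ridge) logistic regression on two deliberately correlated features with the glmnet package (all data here is simulated):

    library(glmnet)
    set.seed(1)
    n  <- 200
    x1 <- rnorm(n)
    x2 <- x1 + rnorm(n, sd = 0.1)                 # x2 nearly duplicates x1
    y  <- rbinom(n, 1, plogis(x1 - x2))
    # alpha = 0 gives ridge regularisation, which keeps the fit stable even
    # though the two predictors are almost collinear.
    fit <- cv.glmnet(cbind(x1, x2), y, family = "binomial", alpha = 0)
    coef(fit, s = "lambda.min")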

Ordinal logistic regression: how does it differ from logistic regression?

I am sure this question may not be in the brilliant category, but somehow, to learn machine learning, I may have to start with a stupid question. So please bear with me.
I understand the regression terminology only partially.
Regression essentially gives an idea of the relationship between the dependent and independent variables.
If the dependent variable is continuous and you see a linear relation between the dependent and independent variables, then linear regression is the way to go.
A slight change now: if the dependent variable is something like a binary value (Y/N), i.e. the output follows a binomial distribution, then logistic regression is the way to go, which implies a non-linear relationship between the dependent and independent variables.
So far, so good? Please correct me if I am wrong.
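As a minimal illustration of the two cases just described, using R's built-in mtcars data (mpg is continuous, am is 0/1):

    lm(mpg ~ wt + hp, data = mtcars)                     # continuous outcome: linear regression
    glm(am ~ wt + hp, data = mtcars, family = binomial)  # binary outcome: logistic regression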
Now my question is with respect to ordinal logistic regression.
I have started looking at the below link for reference
https://statistics.laerd.com/spss-tutorials/ordinal-regression-using-spss-statistics.php
Where it is mentioned that " It can be considered as either a generalisation of multiple linear regression or as a generalisation of binomial logistic regression".
Could someone help me understand this above statement with examples?
Logistic regression can be considered an extension of linear regression, but instead of predicting continuous variables it predicts discrete variables by introducing an activation function. So you are asked to produce a discriminative function that, based on X, outputs a value in {1, 2, ..., k}, where k is the number of classes in your problem. X can be composed of features that are continuous, discrete, or both; it does not matter, just make sure you apply appropriate pre-processing to them.
The base case of logistic regression is finding the decision boundary that divides two classes. In order to handle more classes, you have to use another approach. There are several: softmax (https://en.wikipedia.org/wiki/Softmax_function), one-vs-all (https://en.wikipedia.org/wiki/Multiclass_classification), etc.
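A minimal sketch of the softmax option in R (just the function itself, applied to a made-up score vector):

    # Softmax turns k real-valued scores into k class probabilities that sum to 1.
    softmax <- function(z) {
      e <- exp(z - max(z))   # subtracting max(z) avoids numerical overflow
      e / sum(e)
    }
    softmax(c(2, 1, 0.5))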
Finally, to answer your question: ordinal logistic regression is an extension of logistic regression that additionally takes into account the ordering of the output categories, such as grades on a test. Take a look online for examples.
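A minimal R sketch of ordinal logistic regression with MASS::polr, on simulated data where the outcome is an ordered grade (all names and numbers are made up):

    library(MASS)
    set.seed(1)
    hours <- runif(150, 0, 10)
    # An ordered factor outcome: fail < pass < distinction.
    grade <- cut(hours + rnorm(150), breaks = c(-Inf, 3, 6, Inf),
                 labels = c("fail", "pass", "distinction"), ordered_result = TRUE)
    polr(grade ~ hours, Hess = TRUE)   # proportional-odds ordinal logistic model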

How does auto regression work with correlated independent variables?

I know that auto-regression is a regression on lagged variables. But we know that in linear regression we cannot use correlated independent variables.
Then how does auto-regression work? And what is the difference between that and linear regression?
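As a minimal sketch of what an auto-regression looks like in R: simulate an AR(2) series and fit it. The lagged values used as regressors are correlated with one another, yet the model is estimated routinely (the coefficients 0.6 and 0.3 are arbitrary choices for illustration):

    set.seed(1)
    y   <- arima.sim(model = list(ar = c(0.6, 0.3)), n = 200)  # simulated AR(2) series
    fit <- arima(y, order = c(2, 0, 0))                        # fit an AR(2) model
    fit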

Multinomial logistic regression steps in SPSS

I have data suited to multinomial logistic regression, but I don't know how to formulate the model for predicting my Y.
How do I perform Multinomial Logistic Regression using SPSS?
How does the stepwise method work?
There are plenty of examples of annotated output for SPSS multinomial logistic regression:
UCLA example
My own list of links and resources
The stepwise method provides a data-driven approach to selecting your predictor variables. In general, the decision to use a data-driven, direct-entry, or hierarchical approach comes down to whether you want to test theory (i.e., direct entry or hierarchical) or simply optimise prediction (i.e., stepwise and related methods).
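For comparison, a minimal R sketch (not SPSS) of those two ideas: fit a multinomial logistic model with all predictors entered directly, then let an AIC-based stepwise search do data-driven selection. This is only a rough analogue of SPSS's stepwise facility, using the built-in iris data:

    library(nnet)
    library(MASS)
    # Direct entry: all predictors specified up front.
    fit <- multinom(Species ~ ., data = iris, trace = FALSE)
    # Data-driven: backward stepwise selection by AIC.
    stepAIC(fit, direction = "backward", trace = FALSE)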
