This is w.r.t. a hybrid of an ANN and logistic regression in a binary classification problem. For example, one of the papers I came across states: "A hybrid model type is constructed by using the logistic regression model to calculate the probability of failure and then adding that value as an additional input variable into the ANN. This type of model is defined as a Plogit-ANN model".
So, for n input variables, I'm trying to understand how the additional input n+1 to an ANN is treated by the activation function (e.g. a logit function) and in the summation of weights multiplied by inputs. Do we treat this probability variable n+1 as a standalone term, like a special kind of b0 that we add to the sum of weights multiplied by inputs, e.g. summation for each neuron = (Sum(Wi*Xi)) + additional variable?
Thank you for your assistance.
According to the description provided, the easiest way is to treat this as an additional feature of your data. You have a model that predicts something about your original dataset (the probability of some additional event), so you get x -> f(x). You simply concatenate it to your feature vector, so x' = [x1 x2 ... xk f(x)], and push it through the network.
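For concreteness, here is a minimal sketch of that concatenation, assuming scikit-learn; the arrays X and y and all sizes are made-up placeholders, not anything from the paper:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X = np.random.rand(100, 5)           # stand-in for the original k features
y = np.random.randint(0, 2, 100)     # binary labels

logit = LogisticRegression().fit(X, y)
p = logit.predict_proba(X)[:, 1]     # probability of failure from the logistic model

X_prime = np.hstack([X, p.reshape(-1, 1)])    # x' = [x1 ... xk f(x)]
ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000).fit(X_prime, y)

Note that inside the network the new column is treated exactly like any other input x_i: it gets its own weight in each neuron's Sum(Wi*Xi), rather than acting as a bias term b0.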
However, the described approach is quite naive, since you are doing these two things (training f and training the neural net) completely independently. What might be more beneficial is to instead treat fitting f as an auxiliary loss and train your model jointly, as in the sketch below.
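A rough sketch of such joint training with Keras; the layer sizes, the auxiliary loss weight, and the single shared input are my assumptions, not something prescribed by the paper:

from tensorflow import keras

inputs = keras.Input(shape=(5,))
# auxiliary logistic head playing the role of f(x)
aux_out = keras.layers.Dense(1, activation="sigmoid", name="aux")(inputs)
# feed x' = [x, f(x)] into the main network
concat = keras.layers.Concatenate()([inputs, aux_out])
hidden = keras.layers.Dense(10, activation="relu")(concat)
main_out = keras.layers.Dense(1, activation="sigmoid", name="main")(hidden)

model = keras.Model(inputs, [main_out, aux_out])
model.compile(optimizer="adam",
              loss={"main": "binary_crossentropy", "aux": "binary_crossentropy"},
              loss_weights={"main": 1.0, "aux": 0.3})
# model.fit(X, {"main": y, "aux": y}, epochs=10)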
When I train my neural network with Theano or TensorFlow, they report a variable called "loss" per epoch.
How should I interpret this variable? Is a higher loss better or worse, and what does it mean for the final performance (accuracy) of my neural network?
The lower the loss, the better a model (unless the model has over-fitted to the training data). The loss is calculated on the training and validation sets, and its interpretation is how well the model is doing on these two sets. Unlike accuracy, loss is not a percentage. It is a summation of the errors made on each example in the training or validation set.
In the case of neural networks, the loss is usually negative log-likelihood and residual sum of squares for classification and regression respectively. Then naturally, the main objective in a learning model is to reduce (minimize) the loss function's value with respect to the model's parameters by changing the weight vector values through different optimization methods, such as backpropagation in neural networks.
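Concretely, for targets y and predictions y_hat those two losses look like this (a small sketch; the function names are mine):

import numpy as np

def negative_log_likelihood(y, y_hat):
    # binary cross-entropy summed over the examples (classification)
    return -np.sum(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

def residual_sum_of_squares(y, y_hat):
    # squared errors summed over the examples (regression)
    return np.sum((y - y_hat) ** 2)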
Loss value implies how well or poorly a certain model behaves after each iteration of optimization. Ideally, one would expect the reduction of loss after each, or several, iteration(s).
The accuracy of a model is usually determined after the model parameters have been learned and fixed, when no learning is taking place. Then the test samples are fed to the model and the number of mistakes (zero-one loss) the model makes is recorded, after comparison to the true targets. Then the percentage of misclassification is calculated.
For example, if the number of test samples is 1000 and model classifies 952 of those correctly, then the model's accuracy is 95.2%.
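Or, as a one-line sketch with numpy arrays of predictions and true targets (placeholder names):

import numpy as np

accuracy = 100 * np.mean(predictions == targets)   # e.g. 952 of 1000 -> 95.2%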
There are also some subtleties while reducing the loss value. For instance, you may run into the problem of over-fitting, in which the model "memorizes" the training examples and becomes ineffective on the test set. Over-fitting also occurs in cases where you do not employ regularization, where you have a very complex model (the number of free parameters W is large), or where the number of data points N is very low.
They are two different metrics for evaluating your model's performance, usually used in different phases.
Loss is often used in the training process to find the "best" parameter values for your model (e.g. weights in neural network). It is what you try to optimize in the training by updating weights.
Accuracy is more from an applied perspective. Once you have found the optimized parameters above, you use this metric to evaluate how accurate your model's predictions are compared to the true data.
Let us use a toy classification example. You want to predict gender from one's weight and height. You have 3 data points, as follows (0 stands for male, 1 stands for female):
y1 = 0, x1_w = 50kg, x1_h = 160cm;
y2 = 0, x2_w = 60kg, x2_h = 170cm;
y3 = 1, x3_w = 55kg, x3_h = 175cm;
You use a simple logistic regression model: y = 1 / (1 + exp(-(b1*x_w + b2*x_h)))
How do you find b1 and b2? You define a loss first and use an optimization method to minimize the loss iteratively by updating b1 and b2.
In our example, a typical loss for this binary classification problem is the cross-entropy (note the minus sign in front of the summation):

loss = -Sum_i [ y_i*log(y_hat_i) + (1 - y_i)*log(1 - y_hat_i) ]

We don't know what b1 and b2 should be. Let us make a random guess, say b1 = 0.1 and b2 = -0.03. Then what is our loss now? The model's outputs are y_hat_1 ≈ 0.550, y_hat_2 ≈ 0.711 and y_hat_3 ≈ 0.562, so the loss is approximately 0.80 + 1.24 + 0.58 ≈ 2.6.
Then your learning algorithm (e.g. gradient descent) will find a way to update b1 and b2 to decrease the loss.
What if b1 = 0.1 and b2 = -0.03 are the final b1 and b2 (output from gradient descent)? What is the accuracy then?
Let's assume that if y_hat >= 0.5 we decide our prediction is female (1), otherwise 0. Then our algorithm predicts y1 = 1, y2 = 1 and y3 = 1. What is our accuracy? We make wrong predictions on y1 and y2 and a correct one on y3, so our accuracy is 1/3 = 33.33%.
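The whole toy example, worked out in numpy (a sketch; the numbers match those above):

import numpy as np

X = np.array([[50, 160], [60, 170], [55, 175]], dtype=float)   # weight, height
y = np.array([0, 0, 1])
b = np.array([0.1, -0.03])                                     # b1, b2

y_hat = 1 / (1 + np.exp(-X.dot(b)))   # ≈ [0.550, 0.711, 0.562]
loss = -np.sum(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))   # ≈ 2.6
accuracy = np.mean((y_hat >= 0.5) == y)                           # 1/3 ≈ 33.33%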
PS: In Amir's answer, back-propagation is said to be an optimization method in NNs. I would rather treat it as a way to compute the gradients of the weights in an NN. Common optimization methods in NNs are gradient descent and Adam.
Just to clarify the Training/Validation/Test data sets:
The training set is used to perform the initial training of the model, fitting the weights of the neural network.
The validation set is used after the neural network has been trained. It is used for tuning the network's hyperparameters, and comparing how changes to them affect the predictive accuracy of the model. Whereas the training set can be thought of as being used to build the neural network's weights, the validation set allows fine-tuning of the parameters or architecture of the model. It is useful because it allows repeatable comparison of these different parameters/architectures against the same data and network weights, to observe how parameter/architecture changes affect the predictive power of the network.
Then the test set is used only to test the predictive accuracy of the trained neural network on previously unseen data, after training and parameter/architecture selection with the training and validation data sets.
I have a classification and regression question on machine learning.
First question, about the following dataset:
http://it.tinypic.com/view.php?pic=oh3gj7&s=8#.VIjhRDGG_lF
Can we say the dataset is linearly separable?
In order to apply a linear model for classification, is a transformation of the input space not needed for this dataset, or not possible for this dataset?
My answer to the first is no, but I am not sure about the second; I am not sure whether a transformation is possible for this dataset.
Second question, about a regression problem:
Given the following data set f : R -> R
http://it.tinypic.com/view.php?pic=madsmr&s=8#.VIjhVjGG_lE
Can we say that:
A linear model for regression can be used to learn the function associated with this data set?
Given this data set, it is not possible to determine an optimal configuration of the linear model?
I am reading Tom Mitchell's Machine Learning and Bishop's Pattern Recognition and Machine Learning, but I still have trouble giving the right answers.
Thanks in advance.
Neither of these datasets can be modeled using linear classification/regression.
As for the "input data transformation": as long as the dataset is consistent (there are no two identical points with different labels), there always exists a transformation after which the data is linearly separable. In particular, one can construct it with:
phi(x) = 1 iff label of x is "1"
In other words, you map all positive samples to "1" and all negatives to "0", so your data is now trivially linearly separable. Or simply map your N points onto the N unit vectors of R^N, so that the i-th point is mapped to [0 0 0 ... 1 ... 0 0 0]^T, where the "1" appears in the i-th place. Such a dataset is trivially linearly separable for any labeling, as the snippet below illustrates.
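A quick numpy illustration of the second construction (the data and labeling are arbitrary placeholders):

import numpy as np

labels = np.array([1, 0, 1, 1, 0])   # any labeling of 5 consistent points
phi = np.eye(len(labels))            # i-th point -> i-th unit vector of R^5
w = labels                           # choose the labels themselves as weights

scores = phi.dot(w)                  # equals labels exactly
print((scores >= 0.5).astype(int))   # thresholding at 0.5 separates perfectly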
I have a classification task, and I use the svm_perf application.
The question is: having trained the model, is it possible to get the weights of the features?
There is an -a parameter which outputs the alphas; honestly, I don't recall alphas in SVM, I thought the weights were always w.
If you are using a linear SVM, there is a Python script based on the model file output by svm_learn and svm_perf_learn. To be more specific, the weight is just w = SUM_i (y_i*alpha_i*sv_i), where sv_i is a support vector and y_i is its label from the training sample.
If you are using a non-linear SVM, I don't think the weight coefficients are directly related to the input space. But you can still get the decision function:
f(x) = sgn( SUM_i (alpha_i*y_i*K(sv_i,x)) + b );
where K is your kernel function.
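For the linear case, a small numpy sketch of that sum (the alphas, labels, and support vectors here are made-up placeholders, not svm_perf's actual file format):

import numpy as np

alphas = np.array([0.5, 1.2])                # alpha_i from the -a output
ys = np.array([1, -1])                       # labels y_i of the support vectors
svs = np.array([[1.0, 2.0], [0.5, -1.0]])    # support vectors sv_i

w = np.sum((alphas * ys)[:, None] * svs, axis=0)   # w = SUM_i y_i*alpha_i*sv_i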
When we have to predict the value of a categorical (or discrete) outcome we use logistic regression. I believe we use linear regression to also predict the value of an outcome given the input values.
Then, what is the difference between the two methodologies?
Linear regression output as probabilities
It's tempting to use the linear regression output as probabilities, but that is a mistake because the output can be negative or greater than 1, whereas a probability cannot. As regression might actually produce probabilities that are less than 0 or even bigger than 1, logistic regression was introduced.
Source: http://gerardnico.com/wiki/data_mining/simple_logistic_regression
Outcome
In linear regression, the outcome (dependent variable) is continuous.
It can have any one of an infinite number of possible values.
In logistic regression, the outcome (dependent variable) has only a limited number of possible values.
The dependent variable
Logistic regression is used when the response variable is categorical in nature: for instance, yes/no, true/false, red/green/blue, 1st/2nd/3rd/4th, etc.
Linear regression is used when your response variable is continuous. For instance, weight, height, number of hours, etc.
Equation
Linear regression gives an equation of the form Y = mX + C, i.e. an equation of degree 1.
However, logistic regression gives an equation of the form
Y = 1 / (1 + e^(-X))
Coefficient interpretation
In linear regression, the coefficient interpretation of the independent variables is quite straightforward (i.e. holding all other variables constant, a unit increase in this variable is expected to increase/decrease the dependent variable by xxx).
However, in logistic regression the interpretation depends on the family (binomial, Poisson, etc.) and link (log, logit, inverse-log, etc.) you use.
Error minimization technique
Linear regression uses the ordinary least squares method to minimise the errors and arrive at the best possible fit, while logistic regression uses the maximum likelihood method to arrive at the solution.
Linear regression is usually solved by minimizing the least squares error of the model to the data, therefore large errors are penalized quadratically.
Logistic regression is just the opposite. Using the logistic loss function causes large errors to be penalized by an asymptotically constant amount.
Consider linear regression on categorical {0, 1} outcomes to see why this is a problem. If your model predicts a score of 38 when the truth is 1, you've lost almost nothing under the logistic loss, since the sigmoid squashes 38 to essentially 1. Linear regression would try to reduce that 38; logistic wouldn't (as much).
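A quick numerical sketch of that contrast, using the 38 from the example:

import numpy as np

score, y = 38.0, 1.0
squared_loss = (y - score) ** 2        # 1369.0: squared error panics
p = 1 / (1 + np.exp(-score))           # sigmoid maps 38 to ~1.0
logistic_loss = -np.log(p)             # ~3e-17: essentially no penalty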
In linear regression, the outcome (dependent variable) is continuous. It can have any one of an infinite number of possible values. In logistic regression, the outcome (dependent variable) has only a limited number of possible values.
For instance, if X contains the area in square feet of houses, and Y contains the corresponding sale price of those houses, you could use linear regression to predict the selling price as a function of house size. While the possible selling price may not actually be just any value, there are so many possible values that a linear regression model would be chosen.
If, instead, you wanted to predict, based on size, whether a house would sell for more than $200K, you would use logistic regression. The possible outputs are either Yes, the house will sell for more than $200K, or No, the house will not.
Just to add to the previous answers.
Linear regression
Is meant to resolve the problem of predicting/estimating the output value for a given element X (say f(x)). The result of the prediction is a continuous function whose values may be positive or negative. In this case you normally have an input dataset with lots of examples and the output value for each of them. The goal is to fit a model to this data set so you are able to predict the output for new, never-seen elements. The classical example is fitting a line to a set of points, but in general linear regression can be used to fit more complex models (using higher polynomial degrees):
Resolving the problem
Linear regression can be solved in two different ways (both sketched after this list):
Normal equation (direct way to solve the problem)
Gradient descent (Iterative approach)
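Both ways on the same toy data, as a small numpy sketch (the data is made up):

import numpy as np

X = np.c_[np.ones(4), [1.0, 2.0, 3.0, 4.0]]   # design matrix with bias column
y = np.array([2.0, 4.1, 5.9, 8.2])

# 1) Normal equation: theta = (X^T X)^-1 X^T y, solved directly
theta_ne = np.linalg.solve(X.T.dot(X), X.T.dot(y))

# 2) Gradient descent: iteratively step against the squared-error gradient
theta_gd = np.zeros(2)
for _ in range(5000):
    grad = X.T.dot(X.dot(theta_gd) - y) / len(y)
    theta_gd -= 0.05 * grad
# theta_gd converges to theta_ne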
Logistic regression
Is meant to resolve classification problems where, given an element, you have to classify it into one of N categories. Typical examples are, given an email, classifying it as spam or not, or, given a vehicle, finding which category it belongs to (car, truck, van, etc.). Basically, the output is a finite set of discrete values.
Resolving the problem
Logistic regression problems have no closed-form solution like the normal equation, so they are resolved iteratively, typically using gradient descent. The formulation in general is very similar to linear regression; the only difference is the use of a different hypothesis function. In linear regression the hypothesis has the form:
h(x) = theta_0 + theta_1*x_1 + theta_2*x_2 ..
where theta is the model we are trying to fit and [1, x_1, x_2, ...] is the input vector. In logistic regression the hypothesis function is different: the same linear combination is passed through the sigmoid, h(x) = g(theta^T * x), where
g(z) = 1 / (1 + e^(-z))
This function has a nice property: it maps any value into the range [0,1], which is appropriate for handling probabilities during classification. For example, in the case of binary classification g(x) can be interpreted as the probability of belonging to the positive class. In this case, the classes are separated by a decision boundary, which is basically a curve that decides the separation between the different classes. Below is an example of the whole pipeline on a dataset separated into two classes.
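A compact scikit-learn sketch (the two-blob data is synthetic; scikit-learn is my choice here, not something the answer prescribes):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = np.r_[rng.randn(50, 2) + 2, rng.randn(50, 2) - 2]   # two separated blobs
y = np.r_[np.ones(50), np.zeros(50)]

clf = LogisticRegression().fit(X, y)
proba = clf.predict_proba(X)[:, 1]     # g(theta^T x): probability of class 1
pred = (proba >= 0.5).astype(int)      # the 0.5 threshold is the decision boundary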
You can also use the code below to fit the linear regression model and inspect it:

import pandas as pd
import matplotlib.pyplot as plt
import sklearn.model_selection
import statsmodels.api as sm

q_df = details_df
# one-hot encode the categorical columns
q_df = pd.get_dummies(q_df, columns=[
    "1", "2", "3", "4", "5", "6", "7", "8", "9"
])
q_1_df = q_df["1"]
q_df = q_df.drop(["2", "3", "4", "5"], axis=1)

x = sm.add_constant(q_df)  # add the intercept term
train_x, test_x, train_y, test_y = sklearn.model_selection.train_test_split(
    x, q3_rechange_delay_df, test_size=0.2, random_state=123)

lmod = sm.OLS(train_y, train_x).fit()
lmod.summary()

lmod.predict()[:10]                          # first ten fitted values
lmod.get_prediction().summary_frame()[:10]   # predictions with intervals

sm.qqplot(lmod.resid, line="q")
plt.title("Q-Q plot of Standardized Residuals")
plt.show()
Simply put, linear regression is a regression algorithm, which outputs a possibly continuous and infinite range of values; logistic regression is considered a binary classifier algorithm, which outputs the 'probability' of the input belonging to a label (0 or 1).
The basic difference :
Linear regression is basically a regression model, which means it gives a non-discrete/continuous output of a function. So this approach gives the value. For example: given x, what is f(x)?
For example, given a training set of different factors and the price of a property, after training we can provide the required factors to determine what the property price will be.
Logistic regression is basically a binary classification algorithm, which means that the output of the function is discrete-valued. For example: for a given x, if f(x) > threshold, classify it as 1, else classify it as 0.
For example, given a set of brain tumour sizes as training data, we can use the size as input to determine whether it is a benign or malignant tumour. Therefore the output here is discrete: either 0 or 1.
*here the function is basically the hypothesis function
They are both quite similar in solving for the solution, but as others have said, one (Logistic Regression) is for predicting a category "fit" (Y/N or 1/0), and the other (Linear Regression) is for predicting a value.
So if you want to predict whether you have cancer Y/N (or a probability), use logistic regression. If you want to know how many years you will live, use linear regression!
Regression means a continuous variable; linear means there is a linear relation between y and x.
Example: you are trying to predict salary from the number of years of experience. So here salary is the dependent variable (y) and years of experience is the independent variable (x).
y = b0 + b1*x1
We are trying to find the optimal values of the constants b0 and b1 which give us the best-fitting line for your observed data.
It is the equation of a line, which gives continuous values of y from x = 0 up to very large values.
This line is called the linear regression model.
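Fitting b0 and b1 for the salary example, as a small sketch (the numbers are made up):

import numpy as np

experience = np.array([1.0, 2.0, 3.0, 5.0, 8.0])    # years of experience (x)
salary = np.array([40.0, 48.0, 55.0, 70.0, 95.0])   # salary in thousands (y)

b1, b0 = np.polyfit(experience, salary, deg=1)      # least-squares line
predicted = b0 + b1 * experience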
Logistic regression is a type of classification technique. Don't be misled by the term regression. Here we predict whether y = 0 or y = 1.
Here we first need to find p(y=1), the probability of y = 1, given x:
p = 1 / (1 + e^-(b0 + b1*x))
Equivalently, the probability p is related to the linear predictor through the logit:
log(p / (1 - p)) = b0 + b1*x
Example: we can classify a tumour having more than a 50% chance of being cancerous as 1, and a tumour having less than a 50% chance as 0.
Here a red point would be predicted as 0 whereas a green point would be predicted as 1.
I cannot agree more with the above comments. Beyond that, there are some more differences:
In Linear Regression, residuals are assumed to be normally distributed.
In Logistic Regression, residuals need to be independent but not normally distributed.
Linear Regression assumes that a constant change in the value of the explanatory variable results in constant change in the response variable.
This assumption does not hold if the value of the response variable represents a probability (in Logistic Regression)
GLMs (generalized linear models) do not assume a linear relationship between the dependent and independent variables. However, the logit model assumes a linear relationship between the link function and the independent variables.
| Basis | Linear | Logistic |
|---|---|---|
| Basic | The data is modelled using a straight line. | The probability of some obtained event is represented as a linear function of a combination of predictor variables. |
| Linear relationship between dependent and independent variables | Required | Not required |
| The independent variables | May be correlated with each other (especially in multiple linear regression) | Should not be correlated with each other (no multicollinearity) |
In short:
Linear Regression gives a continuous output, i.e. any value within a range of values.
Logistic Regression gives discrete output. i.e. Yes/No, 0/1 kind of outputs.
To put it simply: in a linear regression model, if more test cases arrive that are far away from the threshold (say 0.5) for predicting y = 1 vs y = 0, then the fitted hypothesis will shift and become worse. Therefore the linear regression model is not used for classification problems.
Another problem is that if the classes are y = 0 and y = 1, h(x) can be > 1 or < 0. So we use logistic regression, where 0 <= h(x) <= 1.
Logistic Regression is used for predicting categorical outputs like Yes/No or Low/Medium/High. There are basically two types of logistic regression: binary logistic regression (Yes/No, Approved/Disapproved) and multi-class logistic regression (Low/Medium/High, digits from 0-9, etc.).
On the other hand, linear regression is used if your dependent variable (y) is continuous.
y = mx + c is a simple linear regression equation (m is the slope and c is the y-intercept). Multiple linear regression has more than one independent variable (x1, x2, x3, etc.).
In linear regression the outcome is continuous whereas in logistic regression, the outcome has only a limited number of possible values(discrete).
example:
In a scenario where the given value of x is the size of a plot in square feet, predicting y, i.e. the rate of the plot, comes under linear regression.
If, instead, you wanted to predict, based on size, whether the plot would sell for more than 300000 Rs, you would use logistic regression. The possible outputs are either Yes, the plot will sell for more than 300000 Rs, or No, it will not.
In the case of Linear Regression the outcome is continuous, while in the case of Logistic Regression the outcome is discrete (not continuous).
To perform Linear regression we require a linear relationship between the dependent and independent variables. But to perform Logistic regression we do not require a linear relationship between the dependent and independent variables.
Linear Regression is all about fitting a straight line in the data while Logistic Regression is about fitting a curve to the data.
Linear Regression is a regression algorithm for Machine Learning while Logistic Regression is a classification Algorithm for machine learning.
Linear regression assumes gaussian (or normal) distribution of dependent variable. Logistic regression assumes binomial distribution of dependent variable.
The basic difference between Linear Regression and Logistic Regression is :
Linear Regression is used to predict a continuous or numerical value, but when we are looking to predict a value that is categorical, Logistic Regression comes into the picture.
Logistic Regression is used for binary classification.