I am conducting a Bayesian analysis in WinBUGS. Here is my model:
y[i] ~ dnorm( mu[i], tau )
b[i] ~ dnorm(0.0, alpha)
mu[i] <- 1 - (beta1*x1[i] + beta2*x2[i] + ... + beta20*x20[i]) + b[i]
where b[i] is the i-th random effect. I am wondering how I can specify prior distributions for tau, alpha, and the betas. What considerations come into play? Any help would be greatly appreciated.
Cheers
Normally, you would use dgamma as the prior distribution for a precision parameter:
tau ~ dgamma(0.01, 0.01)
alpha ~ dgamma(0.01, 0.01)
For the regression coefficients, I would use something like a vague ("flat") normal prior:
beta ~ dnorm(0, 1.0E-10)
i.e. a precision of 1/(100000^2), which corresponds to a standard deviation of 100,000. More info on priors for regression coefficients here.
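Putting those pieces together, here is a minimal sketch of how the full model might look, keeping your structure and assuming y, an N x 20 covariate matrix x, and N are supplied as data (the inprod call is just a compact way of writing your sum of 20 beta*x terms):

model {
  for (i in 1:N) {
    y[i] ~ dnorm(mu[i], tau)
    mu[i] <- 1 - inprod(beta[], x[i, ]) + b[i]
    b[i] ~ dnorm(0.0, alpha)        # random effect for observation i
  }
  for (k in 1:20) {
    beta[k] ~ dnorm(0.0, 1.0E-6)    # vague normal prior on each coefficient
  }
  tau ~ dgamma(0.01, 0.01)          # vague prior on the residual precision
  alpha ~ dgamma(0.01, 0.01)        # vague prior on the random-effect precision
}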
I found this question online. Can someone explain in detail why using OLS is better? Is it only because the number of samples is not enough? Also, why not use all 1000 samples to estimate the prior distribution?
We have 1000 randomly sampled data points. The goal is to build a regression model with one response variable from k regressor variables. Which is better? 1. (Bayesian Regression) Use the first 500 samples to estimate the parameters of an assumed prior distribution, then use the last 500 samples to update that prior to a posterior distribution, with the posterior estimates used in the final regression model. 2. (OLS Regression) Use a simple ordinary least squares regression model on all 1000 samples.
"Better" is always a matter of opinion, and it greatly depends on context.
Advantages to a frequentist OLS approach: Simpler, faster, more accessible to a wider audience (and therefore less to explain). A wise professor of mine used to say "You don't need to build an atom smasher when a flyswatter will do the trick."
Advantages to an equivalent Bayesian approach: More flexible to further model development, can directly model posteriors of derived/calculated quantities (there are more, but these have been my motivations for going Bayesian with a given analysis). Note the word "equivalent" - there are things you can do in a Bayesian framework that you can't do within a frequentist approach.
And hey, here's an exploration in R, first simulating data, then fitting a typical OLS regression.
N <- 1000
x <- 1:N
epsilon <- rnorm(N, 0, 1)
y <- x + epsilon
summary(lm(y ~ x))
##
## Call:
## lm(formula = y ~ x)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.9053 -0.6723 0.0116 0.6937 3.7880
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.0573955 0.0641910 0.894 0.371
## x 0.9999997 0.0001111 9000.996 <2e-16 ***
## ---
## Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
##
## Residual standard error: 1.014 on 998 degrees of freedom
## Multiple R-squared: 1, Adjusted R-squared: 1
## F-statistic: 8.102e+07 on 1 and 998 DF, p-value: < 2.2e-16
...and here's an equivalent Bayesian regression, using non-informative priors on the regression parameters and all 1000 data points.
library(R2jags)
cat('model {
for (i in 1:N){
y[i] ~ dnorm(y.hat[i], tau)
y.hat[i] <- a + b * x[i]
}
a ~ dnorm(0, .0001)
b ~ dnorm(0, .0001)
tau <- pow(sigma, -2)
sigma ~ dunif(0, 100)
}', file="test.jags")
test.data <- list(x=x,y=y,N=1000)
test.jags.out <- jags(model.file="test.jags", data=test.data,
parameters.to.save=c("a","b","tau","sigma"), n.chains=3, n.iter=10000)
test.jags.out$BUGSoutput$mean$a
## [1] 0.05842661
test.jags.out$BUGSoutput$sd$a
## [1] 0.06606705
test.jags.out$BUGSoutput$mean$b
## [1] 0.9999976
test.jags.out$BUGSoutput$sd$b
## [1] 0.0001122533
Note that the parameter estimates and standard errors/standard deviations are essentially equivalent!
Now here's another Bayesian regression, using the first 500 data points to estimate the priors and then the last 500 to estimate posteriors.
test.data <- list(x=x[1:500],y=y[1:500],N=500)
test.jags.out <- jags(model.file="test.jags", data=test.data,
parameters.to.save=c("a","b","tau","sigma"), n.chains=3, n.iter=10000)
cat('model {
for (i in 1:N){
y[i] ~ dnorm(y.hat[i], tau)
y.hat[i] <- a + b * x[i]
}
a ~ dnorm(a_mn, a_prec)
b ~ dnorm(b_mn, b_prec)
a_prec <- pow(a_sd, -2)
b_prec <- pow(b_sd, -2)
tau <- pow(sigma, -2)
sigma ~ dunif(0, 100)
}', file="test.jags1")
test.data1 <- list(x=x[501:1000],y=y[501:1000],N=500,
a_mn=test.jags.out$BUGSoutput$mean$a,a_sd=test.jags.out$BUGSoutput$sd$a,
b_mn=test.jags.out$BUGSoutput$mean$b,b_sd=test.jags.out$BUGSoutput$sd$b)
test.jags.out1 <- jags(model.file="test.jags1", data=test.data1,
parameters.to.save=c("a","b","tau","sigma"), n.chains=3, n.iter=10000)
test.jags.out1$BUGSoutput$mean$a
## [1] 0.01491162
test.jags.out1$BUGSoutput$sd$a
## [1] 0.08513474
test.jags.out1$BUGSoutput$mean$b
## [1] 1.000054
test.jags.out1$BUGSoutput$sd$b
## [1] 0.0001201778
Interestingly, the inferences are similar to the OLS results, but not nearly as close as the 1000-point Bayesian results. This leads me to suspect that the 500 data points used to train the prior are not carrying as much weight in the analysis as the last 500, and that the prior is effectively getting washed out, though I'm not sure on this point.
Regardless, I can't think of a reason not to just use all 1000 data points (with non-informative priors), particularly since I suspect the 500+500 approach treats the first 500 and the last 500 points differently.
So perhaps the answer to all of this is: I trust the OLS and 1000-point Bayesian results more than the 500+500, and OLS is simpler.
In my opinion it is not a matter of which is better, but a matter of which inference approach you're comfortable with.
You must remember that OLS comes from the frequentist school of inference, and estimation is done via maximum likelihood, which for this particular problem coincides with a geometric argument of distance minimization (in my personal opinion this is very odd, as supposedly we are dealing with a random phenomenon).
On the other hand, in the Bayesian approach inference is done through the posterior distribution, which is proportional to the product of the prior (representing the decision maker's previous information about the phenomenon) and the likelihood.
Again, the question is a matter of which inference approach you're comfortable with.
The lens model in OpenCV is a sort of distortion model, which maps an ideal (undistorted) position to the corresponding real (distorted) position:
x_distorted = x_corrected ( 1 + k_1 * r^2 + k_2 * r^4 + ...),
y_distorted = y_corrected ( 1 + k_1 * r^2 + k_2 * r^4 + ...),
where r^2 = x_corrected^2 + y_corrected^2 in the normalized image coordinates (the tangential distortion is omitted for simplicity). This model is also found in Z. Zhang: "A Flexible New Technique for Camera Calibration," TPAMI 2000, and in the "Camera Calibration Toolbox for Matlab" by Bouguet.
On the other hand, Bradski and Kaehler: "Learning OpenCV" introduces on p. 376 the lens model as a correction model, which maps a distorted position back to the ideal position:
x_corrected = x_distorted ( 1 + k'_1 * r'^2 + k'_2 * r'^4 + ...),
y_corrected = y_distorted ( 1 + k'_1 * r'^2 + k'_2 * r'^4 + ...),
where r'^2 = x_distorted^2 + y_distorted^2 in the normalized image coordinates.
Hartley and Zisserman: "Multiple View Geometry in Computer Vision" also describes this model.
I understand that both the distortion and correction models have advantages and disadvantages in practice. For example, the former makes undistortion of the entire image straightforward, while the latter makes correction of detected feature point locations easy.
My question is: why do they share the same polynomial expression when they are supposed to be inverses of each other? I could find this document evaluating the invertibility, but its theoretical background is not clear to me.
Thank you for your help.
I think the short answer is: they are just different models, so they're not supposed to be each other's inverse. Like you already wrote, each has its own advantages and disadvantages.
As to invertibility, this depends on the order of the polynomial. A 2nd-order (quadratic) polynomial is easily inverted. A 4th-order polynomial requires some more work, but can still be inverted analytically. But as soon as you add a 6th-order term, you'll probably have to resort to numeric methods to find the inverse, because a 5th-order or higher polynomial is not analytically invertible in the general case.
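For illustration, here is a small numeric sketch in R of that numeric route (the coefficients k1 and k2 are made-up example values, not from any real calibration): it inverts a 4th-order radial model by fixed-point iteration.

# Invert r_out = r * (1 + k1*r^2 + k2*r^4) numerically by fixed-point
# iteration; k1 and k2 are made-up example coefficients.
k1 <- -0.28
k2 <-  0.09
forward <- function(r) r * (1 + k1*r^2 + k2*r^4)

invert <- function(r_target, iters = 25) {
  r <- r_target                      # initial guess: assume no distortion
  for (i in 1:iters) {
    r <- r_target / (1 + k1*r^2 + k2*r^4)
  }
  r
}

r0 <- 0.8
invert(forward(r0))                  # recovers approximately 0.8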
According to Taylor expansion, every (sufficiently smooth) formula in the world can be written as c0 + c1*x + c2*x^2 + c3*x^3 + c4*x^4 + ...
The goal is just to discover the constants.
In our particular case the expression must be symmetric in x and -x (an even function), so the constants on x, x^3, x^5, x^7, ... are equal to zero.
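As a quick numeric check of this argument, you can distort a range of radii with one even polynomial and then least-squares fit the reverse mapping using the very same functional form; the fit is typically excellent. A sketch in R, with made-up coefficients:

# Forward model with made-up coefficients, then fit the inverse mapping
# r = rd * (1 + a1*rd^2 + a2*rd^4) by regressing (r/rd - 1) on rd^2, rd^4.
k1 <- -0.28
k2 <-  0.09
r  <- seq(0.01, 1, length.out = 200)
rd <- r * (1 + k1*r^2 + k2*r^4)       # distorted radii

fit <- lm(I(r/rd - 1) ~ 0 + I(rd^2) + I(rd^4))
a <- coef(fit)                         # coefficients of the inverse model
max(abs(r - rd * (1 + a[1]*rd^2 + a[2]*rd^4)))  # residual should be tiny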
So I have two class-conditional Gaussian distributions separated by a decision line (figure omitted). However, I am having real trouble integrating the distributions on the other side of this decision line to get my error rates.
In general, given the analytic form of the decision boundary, you could compute the integrals exactly. However, why not use Monte Carlo, which is fast, simple, and generic (it will work for any distributions and decision boundaries)? All you have to do is repeatedly sample from your Gaussians, check whether each sampled point is on the correct side (N_c) or the incorrect side (N_i), and in the limit you will get your integrals from
INTEGRAL_of_distributions_being_on_correct_side ~ N_c / (N_c + N_i)
INTEGRAL_of_distributions_being_on_incorrect_side ~ N_i / (N_c + N_i)
thus in pseudo code:
N_c = 0
N_i = 0
for i = 1 to N do
    y ~ P({-, +})                 # sample a class label
    x ~ P(X|y)                    # sample a point from that class
    if side_of_decision(x) == y then
        N_c += 1
    else
        N_i += 1
    end
end
return N_c, N_i
In your case P({-, +}) is probably just a 50-50 chance, and P(X|-) and P(X|+) are your two Gaussians.
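For concreteness, here is how that pseudocode might look in R for two 2-D Gaussians and a linear decision rule; the means, covariance, and boundary below are made-up stand-ins for your actual setup.

library(MASS)                          # for mvrnorm

set.seed(1)
N <- 1e5
# Made-up example: class means and a shared identity covariance
mu_neg <- c(-1, 0); mu_pos <- c(1, 0); Sigma <- diag(2)

# Made-up linear decision rule: classify as "+" when x1 + 0.5*x2 > 0
side_of_decision <- function(x) ifelse(x[, 1] + 0.5 * x[, 2] > 0, "+", "-")

y <- sample(c("-", "+"), N, replace = TRUE)   # P({-, +}) = 50-50
x <- matrix(NA, N, 2)
x[y == "-", ] <- mvrnorm(sum(y == "-"), mu_neg, Sigma)
x[y == "+", ] <- mvrnorm(sum(y == "+"), mu_pos, Sigma)

N_c <- sum(side_of_decision(x) == y)   # correctly classified samples
N_i <- N - N_c                         # incorrectly classified samples
N_i / N                                # Monte Carlo estimate of the error rate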
I've gone through a few of Professor Andrew Ng's machine learning lectures and viewed the transcript for logistic regression using Newton's method. However, when implementing logistic regression using gradient descent I face a certain issue: the cost-function graph generated is not convex.
My code goes as follows:
I am using the vectorized implementation of the equation.
% 1. The code below loads the data on your desktop into Octave's memory
x = load('ex4x.dat');
y = load('ex4y.dat');
% 2. Now we want to add a column x0, with value 1 in every row, to the matrix.
% First take the length
m = length(y);
x = [ones(m,1), x];
alpha = 0.1;
max_iter = 100;
g = inline('1.0 ./ (1.0 + exp(-z))');
theta = zeros(size(x(1,:)))'; % theta has to be a 3x1 matrix so that it can multiply x, which is an m x 3 matrix
j = zeros(max_iter,1); % j is a zero vector used to store the values of the cost function J(theta)
for num_iter = 1:max_iter
    % The hypothesis h is calculated inside the loop because it has to be
    % recomputed with the new theta on every iteration
    z = x * theta;
    h = g(z); % this is where the inline function defined earlier takes effect
    j(num_iter) = (1/m) * (-y' * log(h) - (1 - y)' * log(1 - h)); % vectorized form of the cost function J(theta)
    j
    grad = (1/m) * x' * (h - y); % gradient of the cost with respect to theta
    theta = theta - alpha .* grad; % gradient descent update for theta
    theta
end
The code per se doesn't give any error, but it does not produce a proper convex graph.
I would be glad if anybody could point out the mistake or share insight on what's causing the problem.
Thanks
Two things you need to look into:
Machine learning involves learning patterns from data. If your files ex4x.dat and ex4y.dat are randomly generated, they won't contain patterns that you can learn.
You have used variables like g, h, i, j, which makes debugging difficult. Since it's a very small program, it might be a better idea to rewrite it.
Here's my code that gives the convex plot
clc; clear; close all;
load q1x.dat;
load q1y.dat;
X = [ones(size(q1x,1),1) q1x];
Y = q1y;
m = size(X,1);
n = size(X,2) - 1;
% initialize
theta = zeros(n+1,1);
thetaold = ones(n+1,1);
while ( (theta-thetaold)'*(theta-thetaold) > 0.0000001 )
    % calculate the gradient of the log-likelihood, dellltheta
    dellltheta = zeros(n+1,1);
    for j = 1:n+1,
        for i = 1:m,
            dellltheta(j,1) = dellltheta(j,1) + (Y(i,1) - 1/(1 + exp(-theta'*X(i,:)'))) * X(i,j);
        end;
    end;
    % calculate the Hessian
    H = zeros(n+1, n+1);
    for j = 1:n+1,
        for k = 1:n+1,
            for i = 1:m,
                H(j,k) = H(j,k) - (1/(1 + exp(-theta'*X(i,:)'))) * (1 - 1/(1 + exp(-theta'*X(i,:)'))) * X(i,j) * X(i,k);
            end;
        end;
    end;
    % Newton update
    thetaold = theta;
    theta = theta - inv(H)*dellltheta;
    (theta-thetaold)'*(theta-thetaold)
end
I get the following values of error after iterations:
2.8553
0.6596
0.1532
0.0057
5.9152e-06
6.1469e-12
Which, when plotted against the iteration number, shows the error dropping rapidly to zero (plot omitted).
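For comparison, the same Newton iteration can be written without the inner loops. Here is a vectorized sketch in R (X stands for the same design matrix with a leading column of ones, and Y for the 0/1 labels, mirroring the Octave code above):

# Vectorized Newton's method for logistic regression (sketch).
sigmoid <- function(z) 1 / (1 + exp(-z))

newton_logistic <- function(X, Y, tol = 1e-7) {
  theta <- rep(0, ncol(X))
  repeat {
    p    <- sigmoid(X %*% theta)                     # fitted probabilities
    grad <- t(X) %*% (Y - p)                         # gradient of the log-likelihood
    H    <- -t(X) %*% (as.vector(p * (1 - p)) * X)   # Hessian of the log-likelihood
    step <- solve(H, grad)
    theta <- theta - step                            # Newton update
    if (sum(step^2) < tol) break
  }
  theta
}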
I'm trying to minimize my function "FunctionToMinimize", which is defined as follows:
FunctionToMinimize[a_, b_, c_, d_] := (2.35*Sqrt[
Variance[1/2*
(a*#1 + b*#2 + c*#3 + d*#4)
]
]
/Mean[1/2*(a*#1 + b*#2 + c*#3 + d*#4)])
&[DataList1[[1 ;; 1000]],DataList2[[1 ;; 1000]],
DataList3[[1 ;; 1000]], DataList4[[1 ;; 1000]]]
The four parameters a, b, c, and d are restricted to lie somewhere between 0.5 and 1.5. My problem now is that if I call
NMinimize[{FunctionToMinimize[w, x, y, z],
 0.75 < w < 1.25 && 0.75 < x < 1.25 && 0.75 < y < 1.25 && 0.75 < z < 1.25},
 {w, x, y, z}]
the Mathematica kernel shuts down because it does not have enough memory. If I use only the first 100 entries of my DataLists, it finds results (in 4.1 sec), but with DataList[[1;;1000]] or more entries the kernel crashes.
Does anybody have an idea why the NMinimize function uses so much memory? I would need to run the minimization with 150,000 events in each list...
Thanks for your answer,
Cheers,
Andreas
I would guess (but haven't in any way checked) that the problem is that on each call to your function, Mathematica is trying to construct a symbolic expression derived from all your data and that occupies much more memory than you'd expect.
Regardless, the good news -- if you haven't long since moved on and forgotten about this problem -- is that you can turn the function into something much simpler.
So, first of all, the 2.35 and the 1/2s just change your function by a constant factor and don't affect where the minimum is, so let's ignore them. Next, your function is always non-negative, so minimizing it is the same as minimizing its square, so let's do that.
So now you're trying to minimize var(aw+bx+cy+dz)/mean(aw+bx+cy+dz)^2 where w,x,y,z are (perhaps quite long) vectors.
Now your numerator and denominator are both just quadratic forms in a,b,c,d whose coefficients depend (in fixed ways) on those vectors. Specifically, suppose your vectors have length N. Then your function is just
[sum((aw+bx+cy+dz)^2)/N - (sum(aw+bx+cy+dz))^2/N^2] / ((sum(aw+bx+cy+dz))^2/N^2)
which you might prefer to write as N * sum((aw+bx+cy+dz)^2) / (sum(aw+bx+cy+dz))^2 - 1
and in that fraction, e.g., the coefficient of bc in the numerator is 2 sum(xy), and the coefficient in the denominator is 2 sum(x) sum(y).
So you can take your big vectors, compute the relevant coefficients once, and then just ask Mathematica to optimize a function of the form (quadratic / quadratic), which should be pretty painless.
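To make that concrete, here is a rough sketch of the same precompute-then-optimize idea, written in R rather than Mathematica (the four data vectors are random stand-ins for DataList1..DataList4): the variance of the combination is theta' C theta and the mean is theta' m, so only a 4x4 matrix and a length-4 vector are touched during the optimization, no matter how long the data lists are.

# Stand-in data: four vectors of 150,000 "events" each.
set.seed(1)
D <- cbind(rnorm(150000, 10), rnorm(150000, 12),
           rnorm(150000,  9), rnorm(150000, 11))

C <- cov(D)        # 4x4 covariance matrix, computed from the data once
m <- colMeans(D)   # length-4 mean vector, computed from the data once

# var(a*w + b*x + c*y + d*z) / mean(...)^2 as a ratio of quadratic forms;
# minimizing this is equivalent to minimizing the original function.
f <- function(theta) drop(theta %*% C %*% theta) / sum(theta * m)^2

optim(c(1, 1, 1, 1), f, method = "L-BFGS-B",
      lower = rep(0.75, 4), upper = rep(1.25, 4))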