Quadratic cost using np.linalg.norm doesn't work - drake

Why does this line:
prog.AddQuadraticErrorCost(np.identity(len(q)), q0, q)
work, but this:
prog.AddCost(np.linalg.norm(q_variables - q_nominal)**2)
RuntimeError: Expression pow(sqrt((pow(q(0), 2) + pow(q(2), 2) +
pow(q(4), 2) + pow(q(6), 2) + pow(q(7), 2) + pow(q(8), 2) + pow((-1 +
q(5)), 2) + pow((-0.59999999999999998 + q(1)), 2) + pow((1.75 + q(3)),
2))), 2) is not a polynomial. ParseCost does not support
non-polynomial expression.
does not?
Are the expressions not mathematically identical?

They are mathematically identical, but our symbolic engine is not yet powerful enough to recognize that sqrt(x)**2 should be simplified to x.
You can also write the expression using the symbolic form
prog.AddQuadraticCost((q-q0).dot(q-q0))
if you prefer readable code.
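The two cost expressions are indeed the same quadratic; a quick numerical check (plain NumPy, outside Drake, with arbitrary illustrative vectors) confirms that the dot-product form equals the squared norm:

```python
import numpy as np

# Two arbitrary configuration vectors standing in for q and q0.
q = np.array([0.3, -0.6, 1.2, -1.75, 0.0, 1.0, 0.4, -0.2, 0.1])
q0 = np.zeros_like(q)

d = q - q0
quadratic_form = d.dot(d)           # (q - q0)^T (q - q0), a polynomial in q
norm_form = np.linalg.norm(d) ** 2  # ||q - q0||^2, sqrt(...)**2 symbolically

# Numerically identical; only the symbolic engine sees them differently.
assert np.isclose(quadratic_form, norm_form)
```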

Related

Simplification of expressions involving abstract derivatives in maxima

I'm trying to get maxima to perform some "abstract" Taylor series expansions, and I'm running into a simplification issue. A prototype of the problem might be the finite-difference analog of the gradient,
g(x1,dx1) := (f(x1+dx1) - f(x1))/dx1; /* dx1 is small */
taylor(g(x1,dx1), [dx1], [0], 0);
for which maxima returns the expected derivative term. So far so good. But now try the finite-difference analog of the second derivative (Hessian),
h(x1,dx1) := (f(x1+dx1) - 2*f(x1) + f(x1-dx1))/dx1^2;
taylor(h(x1,dx1), dx1, 0, 0);
for which I get a bulky, unsimplified expression, which is not nearly as helpful.
A prototype of the "real" problem I want to solve is to compute the low-order errors of the finite-difference approximation to ∂^2 f/(∂x1 ∂x2),
(f(x1+dx1, x2+dx2) - f(x1+dx1, x2) - f(x1, x2+dx2) + f(x1, x2))/(dx1*dx2)
and to collect the terms up to second order (which involves up to 4th derivatives of f). Without reasonably effective simplification I suspect it will be easier to do by hand than by computer algebra, so I am wondering what can be done to coax maxima into doing the simplification for me.
Consider this example. It uses Barton Willis' pdiff package. I
simplified notation a bit: moved center to [0, 0] and introduced
notation for partial derivatives.
(%i1) load("pdiff") $
(%i2) matchdeclare([n, m], integerp) $
(%i3) tellsimpafter(f(0, 0), 'f00) $
(%i4) tellsimpafter(pderivop(f,n,m)(0,0), concat('f, n, m)) $
(%i5) e: (f(dx, dy) - f(dx, -dy) - f(-dx, dy) + f(-dx, -dy))/(4*dx*dy)$
(%i6) taylor(e, [dx, dy], [0, 0], 3);
(%o6)/T/ f11 + (f31 dx^2 + f13 dy^2)/6 + . . .
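As a sanity check on that expansion (in Python rather than Maxima): for the test function f(x,y) = sin(x)sin(y) we have f11 = 1 and f31 = f13 = -1 at the origin, so the central cross-difference should behave like f11 + (f31 dx^2 + f13 dy^2)/6:

```python
import math

def f(x, y):
    # Test function: f11 = 1 and f31 = f13 = -1 at (0, 0).
    return math.sin(x) * math.sin(y)

def cross_diff(dx, dy):
    # Central finite-difference approximation of d^2 f / (dx dy) at the origin.
    return (f(dx, dy) - f(dx, -dy) - f(-dx, dy) + f(-dx, -dy)) / (4 * dx * dy)

dx, dy = 1e-3, 2e-3
approx = cross_diff(dx, dy)
predicted = 1.0 + (-1.0 * dx**2 + -1.0 * dy**2) / 6  # f11 + (f31 dx^2 + f13 dy^2)/6

# Agreement up to the neglected 4th-order terms of the expansion.
assert abs(approx - predicted) < 1e-10
```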

How to compile latex equations in markdown with Pandoc?

I am having trouble compiling markdown files with LaTeX equations using Pandoc. The code is the following:
+ Regression Specification:
+ $y_{gpt} = \alpha+\beta_1 * Parallel+ \gamma * Prov_p+ \delta *
Year_t+\epsiolon_{gpt}$
I get an undefined control sequence error.
You had a typo in \epsilon, this works:
+ Regression Specification:
+ $y_{gpt} = \alpha+\beta_1 * Parallel+ \gamma * Prov_p+ \delta *
Year_t+\epsilon_{gpt}$
(btw. not sure why you have those plus signs at the beginning: they produce a bullet list, which seems odd in this context.)

Lens distortion model vs correction model

The lens model in OpenCV is a sort of correction model which maps a real (distorted) position to the corresponding ideal (corrected) position:
x_corrected = x_distorted ( 1 + k_1 * r^2 + k_2 * r^4 + ...),
y_corrected = y_distorted ( 1 + k_1 * r^2 + k_2 * r^4 + ...),
where r^2 = x_distorted^2 + y_distorted^2 in the normalized image coordinate (the tangential distortion is omitted for simplicity). This is also found in Z. Zhang: "A Flexible New Technique for Camera Calibration," TPAMI 2000, and also in "Camera Calibration Toolbox for Matlab" by Bouguet.
On the other hand, Bradski and Kaehler: "Learning OpenCV" introduces in p.376 the lens model as a distortion model which maps an ideal (corrected) position to the distorted position:
x_distorted = x_corrected ( 1 + k'_1 * r'^2 + k'_2 * r'^4 + ...),
y_distorted = y_corrected ( 1 + k'_1 * r'^2 + k'_2 * r'^4 + ...),
where r'^2 = x_corrected^2 + y_corrected^2 in the normalized image coordinate.
Hartley and Zisserman: "Multiple View Geometry in Computer Vision" also describes this model.
I understand that both the correction and distortion models have advantages and disadvantages in practice. For example, the former makes correction of detected feature point locations easy, while the latter makes the undistortion of the entire image straightforward.
My question is, why do they share the same polynomial expression when they are supposed to be the inverse of each other? I could find this document evaluating the invertibility, but its theoretical background is not clear to me.
Thank you for your help.
I think the short answer is: they are just different models, so they're not supposed to be each other's inverse. Like you already wrote, each has its own advantages and disadvantages.
As to invertibility, this depends on the order of the polynomial. A 2nd-order (quadratic) polynomial is easily inverted. A 4th-order requires some more work, but can still be analytically inverted. But as soon as you add a 6th-order term, you'll probably have to resort to numeric methods to find the inverse, because a 5th-order or higher polynomial is not analytically invertible in the general case.
By Taylor expansion, any smooth function can be written as c0 + c1*x + c2*x^2 + c3*x^3 + c4*x^4 + ...
The goal is just to discover the constants.
In our particular case the expression must be symmetric in x and -x (an even function), so the coefficients of x, x^3, x^5, x^7 are equal to zero.
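Since the higher-order model has no closed-form inverse, in practice the radial polynomial is inverted numerically. A minimal sketch in plain Python (hypothetical coefficients k1, k2; fixed-point iteration on the normalized radius):

```python
def distort(r_u, k1, k2):
    # Forward (distortion) model on the normalized radius.
    return r_u * (1 + k1 * r_u**2 + k2 * r_u**4)

def undistort(r_d, k1, k2, iters=50):
    # Numeric inverse via fixed-point iteration: start from the distorted
    # radius and repeatedly divide out the current estimate of the factor.
    r_u = r_d
    for _ in range(iters):
        r_u = r_d / (1 + k1 * r_u**2 + k2 * r_u**4)
    return r_u

k1, k2 = -0.2, 0.05   # hypothetical distortion coefficients (small distortion)
r_true = 0.3
r_d = distort(r_true, k1, k2)
r_rec = undistort(r_d, k1, k2)
assert abs(r_rec - r_true) < 1e-12  # round trip recovers the ideal radius
```

For moderate distortion the iteration is a contraction and converges quickly; for strong distortion a Newton step on the same polynomial is the usual alternative.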

create a loop in random forest

I received some advice from a friend about how to improve the model, but I can't follow the instructions very well. Would someone please help me work it out? Below is my prediction model:
fit1 <- cforest(SWL ~ affect + negemo + future + swear + sad + negate
                + sexual + death + filler + leisure + conj + funct
                + discrep + i + past + bio + body + cogmech + cause
                + quant + incl + motion + tentat + excl + insight
                + percept + posemo + ppron + relativ + space + article
                + age + s_all + s_sad + gender,
                data = trainset1,
                controls = cforest_unbiased(ntree = 500, mtry = 1))
test_predict1<-predict(fit1, newdata=testset1, type='response')
My friend's advice:
You simply run a loop, e.g. 500 times. In each iteration you create a temporary copy of the data frame and randomly shuffle the SWL values. Then you run the whole process on that permuted data frame and save the total accuracy at the end. When it finishes you have e.g. 500 saved accuracies (a null distribution) that you can compare to your best result.
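What the friend describes is a permutation test. A minimal Python sketch of the same idea, with toy data and a trivial threshold classifier standing in for cforest (all names and data here are illustrative, not the asker's variables):

```python
import random

# Toy, perfectly separable data standing in for the training set.
random.seed(0)
x = [random.uniform(-1, 1) for _ in range(40)]
y = [1 if xi > 0 else 0 for xi in x]   # stand-in for the SWL outcome

def accuracy(features, labels):
    # Trivial classifier standing in for the fitted forest.
    preds = [1 if xi > 0 else 0 for xi in features]
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

observed = accuracy(x, y)              # accuracy on the real labels

n_perm = 500
null_accuracies = []
for _ in range(n_perm):
    shuffled = y[:]                    # temporary copy of the outcome column
    random.shuffle(shuffled)           # "switch all the SWL values around"
    null_accuracies.append(accuracy(x, shuffled))

# Fraction of permuted runs that match or beat the real accuracy.
p_value = (1 + sum(a >= observed for a in null_accuracies)) / (n_perm + 1)
assert 0 < p_value <= 1
```

In the asker's setting, each loop iteration would refit cforest on the shuffled copy of trainset1 and record the test accuracy, exactly as above but with the real model.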

Matrix of several perspective transformations

I am writing some image processing program using OpenCV.
I need to transform the image using several perspective transformations.
A perspective transformation is defined by a matrix, and I know that a complex affine transform can be obtained by multiplying simple transform matrices (rotation, translation, etc.).
But when I multiplied two perspective transformation matrices, I didn't get the matrix corresponding to applying the first and then the second transformation.
So, how can I get the matrix of several consecutive perspective transformations?
Suppose you have two perspective matrices C: (x,y) -> (u,v) and D: (u,v) -> (r,g), and you want M: (x,y) -> (r,g).
Substitute ui and vi from (1), (2) into equations (3), (4):
 ui = (c00*xi + c01*yi + c02) / (c20*xi + c21*yi + c22) (1)
 vi = (c10*xi + c11*yi + c12) / (c20*xi + c21*yi + c22) (2)
 ri = (d00*ui + d01*vi + d02) / (d20*ui + d21*vi + d22) (3)
 gi = (d10*ui + d11*vi + d12) / (d20*ui + d21*vi + d22) (4)
After the substitution you can see that M = D*C (note the order: the matrix of the transformation applied first goes on the right).
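A quick numerical check (NumPy, arbitrary example matrices) that composing homographies by the matrix product in this order matches applying them one after another:

```python
import numpy as np

# Two arbitrary perspective (homography) matrices, C applied first, then D.
C = np.array([[1.2,   0.1,   5.0],
              [0.0,   0.9,  -2.0],
              [0.001, 0.002, 1.0]])
D = np.array([[0.8,   -0.2,  1.0],
              [0.3,    1.1,  0.5],
              [0.0005, 0.001, 1.0]])

def apply_h(H, pt):
    # Apply a homography to a 2-D point using homogeneous coordinates.
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]   # perspective divide

p = (3.0, -4.0)
step_by_step = apply_h(D, apply_h(C, p))  # C first, then D
composed = apply_h(D @ C, p)              # single combined matrix M = D*C

# Identical up to the homogeneous scale removed by the perspective divide.
assert np.allclose(step_by_step, composed)
```

Note that D @ C and C @ D generally differ; using the wrong order is the usual reason the multiplied matrix "doesn't correspond" to the sequential transforms.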
