Why isn't the use of the kinematic coupling matrix in unit tests for mobilizers more explicit? - drake

I am wondering why the kinematic coupling matrix is not included more explicitly in unit tests for mobilizers. In particular, perhaps something in between the MapVelocityToQDotAndBack and KinematicMapping tests. How do the unit tests ensure that the equation q̇ = N(q)⋅v is actually used by MapVelocityToQDot?

That's a good point. We test that MapVelocityToQdot() and MapQdotToVelocity() are consistent with each other, and we check that the N matrix is correct, but we don't verify that the Map...() functions are consistent with the N matrix. That would be a good addition. If you want to file an issue describing the problem, please do. Even better, if you want to submit a PR with the missing tests, that would be most welcome!
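As a standalone illustration of the kind of consistency check being proposed (this does not use Drake's actual Mobilizer API, and the function names below are made up for the sketch), consider a quaternion ball mobilizer with q = (w, x, y, z): its N(q) maps body angular velocity ω to the quaternion derivative via q̇ = ½ q ⊗ (0, ω), so the N matrix and the map function can be checked against each other directly:

```python
import numpy as np

def quat_N(q):
    """Kinematic coupling matrix N(q) for a quaternion q = (w, x, y, z),
    mapping body angular velocity omega to qdot."""
    w, x, y, z = q
    return 0.5 * np.array([[-x, -y, -z],
                           [ w, -z,  y],
                           [ z,  w, -x],
                           [-y,  x,  w]])

def map_velocity_to_qdot(q, omega):
    """qdot = 0.5 * q ⊗ (0, omega), computed as a quaternion product."""
    w, x, y, z = q
    v = np.array([x, y, z])
    return 0.5 * np.concatenate(([-v @ omega], w * omega + np.cross(v, omega)))

# The missing test: N(q) @ v must equal the output of MapVelocityToQDot.
rng = np.random.default_rng(0)
q = rng.standard_normal(4)
q /= np.linalg.norm(q)           # unit quaternion
omega = rng.standard_normal(3)
assert np.allclose(quat_N(q) @ omega, map_velocity_to_qdot(q, omega))
```

A Drake-side version of this test would sample a few contexts, call the mobilizer's N-matrix accessor and its velocity-to-qdot map, and assert the two agree as above.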

Related

Workflow for "Through-Contact" Trajectory Optimization in Drake

I would like to ask what an appropriate workflow would be for solving a trajectory optimization problem that takes into account contact interactions between different bodies in an MBP in Drake, such as the problems addressed in this paper. I've tried the DirectCollocation class, but apparently it does not treat contact forces as optimization variables. Is there any way to incorporate them into the mathematical program generated by DirectCollocation? Thanks.
There are several things to consider when you plan through contact:
The decision variables are no longer only the state/control; they should also include the contact forces.
The constraints are not only the dynamics constraint x[n+1] = f(x[n], u[n]), but x[n+1] = f(x[n], u[n], λ[n]), where λ is the contact force, together with the complementarity constraint 0 ≤ λ ⊥ ϕ(q) ≥ 0, where ϕ(q) is the signed-distance (gap) function.
Instead of direct collocation (which assumes the state trajectory is a piecewise cubic spline), Michael's paper uses backward Euler integration, which assumes piecewise linear interpolation.
So you would need the following features:
1. Add the contact forces as decision variables.
2. Add the complementarity constraints.
3. Use backward Euler integration instead of DirectCollocationConstraint.
Unfortunately Drake doesn't have an implementation of this yet. The closest match inside Drake is in this folder, and the starting point could be StaticEquilibriumProblem. That class provides feature 1 and part of feature 2 (it has the complementarity constraint, but a static equilibrium constraint instead of the dynamics constraint), but not feature 3 (the backward Euler step).
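The three features above can be sketched on a toy problem without any Drake machinery. The following scipy example (a sketch under assumed parameters, not Drake's API) takes one backward Euler step for a 1D point mass falling onto the ground, with the contact force λ as a decision variable and the complementarity residual λ⋅ϕ(q) driven to zero in the objective:

```python
import numpy as np
from scipy.optimize import minimize

h, m, g = 0.05, 1.0, 9.81   # step size, mass, gravity (assumed values)
q0, v0 = 0.0, -1.0          # point mass at the ground, moving downward

# Decision variables z = [q1, v1, lam]: next position, next velocity,
# and the contact force (feature 1).
def dyn(z):
    """Backward-Euler dynamics residual (feature 3):
    v1 = v0 + h*(-g + lam/m),  q1 = q0 + h*v1."""
    q1, v1, lam = z
    return [v1 - v0 - h * (-g + lam / m),
            q1 - q0 - h * v1]

cons = [{"type": "eq",   "fun": dyn},
        {"type": "ineq", "fun": lambda z: z[0]},   # gap phi(q) = q1 >= 0
        {"type": "ineq", "fun": lambda z: z[2]}]   # lam >= 0

# Feature 2: enforce 0 <= lam ⊥ phi(q) >= 0 by minimizing lam * phi(q)
# subject to the two sign constraints above.
res = minimize(lambda z: z[2] * z[0], x0=[q0, v0, 0.0],
               method="SLSQP", constraints=cons)
q1, v1, lam = res.x
# Expected: the mass stays on the ground (q1 = 0, v1 = 0) and lam balances
# both gravity and the incoming momentum: lam = m*(0 - v0)/h + m*g = 29.81.
```

A full planner would stack one such set of variables and constraints per knot point; solvers like SNOPT or IPOPT handle the resulting NLP better than SLSQP, but the structure is the same.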

drake: search for fixed points and trim points of a system

I have a LeafSystem in Drake with dynamics ẋ = f(x, u) written in DoCalcTimeDerivatives. The fixed points and trim points of this system are not trivial to find. Therefore, I imagine one would need to write a nonlinear optimization problem to find them:
find x, u
s.t. f(x, u) = 0
or
min over x, u of ‖f(x, u)‖²
I am wondering how I should take advantage of the dynamics I have already written in DoCalcTimeDerivatives of the LeafSystem to write a nonlinear optimization that searches over x and u for the fixed points and trim points in Drake. Some existing examples in Drake would be greatly appreciated!
It's simple to write for your case (and only slightly harder to write for the general case... it's on my TODO list).
Assuming your plant supports symbolic, looking at the trajectory-optimization code will give you a sense of how you might write the constraint:
https://github.com/RobotLocomotion/drake/blob/master/systems/trajectory_optimization/direct_transcription.cc#L212
(the autodiff version is just below).
FWIW, the general case from the old MATLAB version is here:
https://github.com/RobotLocomotion/drake/blob/last_sha_with_original_matlab/drake/matlab/solvers/FixedPointProgram.m
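The Drake-native route is to wrap the plant's dynamics in a MathematicalProgram constraint as in the links above. As a standalone illustration of the "find x, u s.t. f(x, u) = 0" formulation (using a hand-written damped pendulum instead of a LeafSystem, with made-up parameters), a root finder over the state with the input pinned to u = 0 does the job:

```python
import numpy as np
from scipy.optimize import root

g, l, b = 9.81, 1.0, 0.1   # gravity, length, damping (assumed values)

def f(x, u=0.0):
    """Time derivatives of a damped pendulum, x = [theta, thetadot].
    Stands in for DoCalcTimeDerivatives; u is pinned to 0 here so the
    problem is square (2 equations, 2 unknowns)."""
    theta, thetadot = x
    return np.array([thetadot,
                     -(g / l) * np.sin(theta) - b * thetadot + u])

# Solve f(x, 0) = 0 from a guess near the upright equilibrium.
sol = root(f, x0=[3.0, 0.1])
# Converges to the upright fixed point theta = pi, thetadot = 0.
```

To search over u as well (the trim-point case), move to a nonlinear program (e.g. minimize ‖f(x, u)‖² over x and u), since the system of equations is then underdetermined.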

Find out the training error after fit()

I'm training a LinearSVC model and I want to get its training error. Is it possible to get it without evaluating it manually?
Thanks
sklearn uses liblinear for this task.
You can take a quick glance into the sources here:
self.coef_, self.intercept_, self.n_iter_ = _fit_liblinear(
X, y, self.C, self.fit_intercept, self.intercept_scaling,
self.class_weight, self.penalty, self.dual, self.verbose,
self.max_iter, self.tol, self.random_state, self.multi_class,
self.loss, sample_weight=sample_weight)
which shows that only the coefficients, intercepts, and number of iterations are processed by sklearn's Python API. Whatever else is available in liblinear's output is not captured, so you can't read out the training error directly without changing the internal code.
There might be a possible hack: turn on verbose mode, redirect the output, and parse whatever additional info is available there. But this assumes the info you are looking for is printed at all, and it's hacky enough that I won't recommend it.
Just use the score method. It won't be too costly compared to fitting.
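Concretely, that means one extra call to `score` on the training data after `fit` (the dataset below is synthetic, just for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Synthetic classification data standing in for the real training set.
X, y = make_classification(n_samples=200, random_state=0)

clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)

train_accuracy = clf.score(X, y)      # mean accuracy on the training set
train_error = 1.0 - train_accuracy    # training (classification) error
```

`score` runs a single `predict` over the training set, which is linear in the number of samples, so it is cheap compared to the iterative fit.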

bayesianoptimization in machine learning

Thanks for reading this. I am currently studying the Bayesian optimization problem and following a tutorial (see the attached Bayesian optimization tutorial).
On page 11, about the acquisition function: before I raise my question, I need to state my understanding of Bayesian optimization to see if there is anything wrong.
First we take some training points and model them with a multivariate Gaussian distribution. Then we use an acquisition function to find the next point to sample. So, for example, we use x1...x(t) as training points, use the acquisition function to find x(t+1), and sample it. Then we model x1...x(t), x(t+1) with a multivariate Gaussian distribution, use the acquisition function to find x(t+2), and so on.
On page 11, it seems we need to find the x that maximizes the probability of improvement. f(x+) comes from the sampled training points (x1...xt) and is easy to get. But how do we get μ(x) and the variance here? I don't know what the x in the equation is. It should be x(t+1), but the paper doesn't say that. And if it is indeed x(t+1), how could I get μ(x(t+1))? You may say to use the equation at the bottom of page 8, but we can only use that equation once we have found x(t+1) and conditioned the multivariate Gaussian distribution on it. Since we don't yet know the next point x(t+1), I have no way to calculate it, in my opinion.
I know this is a tough question. Thanks for answering!!
In fact, I have got the answer.
Indeed it is x(t+1). The direct way is to compute μ and the variance for every candidate x outside the training data, put them into the acquisition function, and find which one is the maximum.
This is time consuming, so instead of trying the candidates one by one, we use a nonlinear optimizer like DIRECT to find the x that maximizes the acquisition function.
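The point is that μ(x) and σ(x) come from the GP posterior, which is defined for *every* candidate x, not just the points already sampled; x(t+1) is then whichever candidate maximizes the acquisition. A minimal numpy/scipy sketch with made-up training data (RBF kernel, grid search standing in for DIRECT):

```python
import numpy as np
from scipy.stats import norm

def kernel(A, B, ls=0.5):
    """RBF kernel between two 1-D point sets."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# Training points x1..xt and their observed values (maximization problem).
X = np.array([0.1, 0.4, 0.9])
y = np.sin(3 * X)
f_best = y.max()                      # f(x+) in the tutorial's notation

K_inv = np.linalg.inv(kernel(X, X) + 1e-8 * np.eye(len(X)))

def posterior(x):
    """GP posterior mean mu(x) and std sigma(x) at candidate points x."""
    k = kernel(x, X)
    mu = k @ K_inv @ y
    # Prior variance k(x, x) = 1 for this kernel, minus the data correction.
    var = 1.0 - np.einsum("ij,jk,ik->i", k, K_inv, k)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def prob_improvement(x, xi=0.01):
    """PI(x) = Phi((mu(x) - f(x+) - xi) / sigma(x))."""
    mu, sigma = posterior(x)
    return norm.cdf((mu - f_best - xi) / sigma)

# Maximize the acquisition over candidates; the argmax becomes x(t+1).
grid = np.linspace(0, 1, 200)
x_next = grid[np.argmax(prob_improvement(grid))]
```

A real implementation would replace the grid with DIRECT or a multi-start gradient method, but the posterior formulas are the same ones from page 8 of the tutorial, evaluated at candidate x rather than at a known sample.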

Control the solving strategy of Z3

So let's assume I have a large problem to solve in Z3, and if I try to solve it in one go, it takes too much time. So I divide the problem into parts and solve them individually.
As a toy example, let's assume that my complex problem is to solve these 3 equations:
eq1: x>5
eq2: y<6
eq3: x+y = 10
So my question is whether it would be possible, for example, to solve eq1 and eq2 first, and then use the result to solve eq3.
(declare-const x Int)
(declare-const y Int)
(assert (> x 5))    ; eq1
(assert (< y 6))    ; eq2
(check-sat)
(assert (= (+ x y) 10))    ; eq3
(check-sat)
(get-model)
This seems to work, but I'm not sure whether it makes sense performance-wise.
Would incremental solving maybe help me out here? Or is there any other feature of Z3 that I can use to partition my problem?
The problems considered are usually satisfiability problems, i.e., the goal is to find one solution (a model). A solution (model) that satisfies eq1 does not necessarily satisfy eq3, so you can't just cut the problem in half. You would have to find all solutions (models) of eq1 so that you could substitute x in eq3 with that set of solutions. (For example, this is what happens in Gaussian elimination once the matrix has been diagonalized.)
