drake: search for fixed points and trim points of a system

I have a LeafSystem in Drake with dynamics \dot{x} = f(x,u) written in DoCalcTimeDerivatives. The fixed points and trim points of this system are not trivial to find, so I imagine one would need to write a nonlinear optimization problem to find them:
find x, u
s.t. f(x, u) = 0

or

min_{x, u} ||f(x, u)||^2
I am wondering: how should I take advantage of the dynamics I have already written in DoCalcTimeDerivatives of the LeafSystem, and write a nonlinear optimization that searches over x and u to find the fixed points and trim points in Drake? Some existing examples in Drake would be greatly appreciated!

It's simple to write for your case (and only slightly harder to write for the general case... it's on my TODO list).
Assuming your plant supports symbolic, looking at the trajectory optimization code will give you a sense of how you might write the constraint:
https://github.com/RobotLocomotion/drake/blob/master/systems/trajectory_optimization/direct_transcription.cc#L212
(the autodiff version is just below).
FWIW, the general case from the old MATLAB version is here:
https://github.com/RobotLocomotion/drake/blob/last_sha_with_original_matlab/drake/matlab/solvers/FixedPointProgram.m
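Concretely, a minimal sketch of that constraint in pydrake might look like this. It is my illustration, not an existing Drake utility: plant, nx, and nu are placeholders for your LeafSystem and its state/input sizes, and it assumes the system has a single vector-valued input port and supports scalar conversion to symbolic.
import numpy as np
from pydrake.math import eq
from pydrake.solvers import MathematicalProgram, Solve

# plant, nx, nu are placeholders for your LeafSystem and its sizes.
prog = MathematicalProgram()
x = prog.NewContinuousVariables(nx, "x")
u = prog.NewContinuousVariables(nu, "u")

# Evaluate xdot = f(x, u) symbolically via scalar conversion.
sym_plant = plant.ToSymbolic()
sym_context = sym_plant.CreateDefaultContext()
sym_context.SetContinuousState(x)
sym_plant.get_input_port(0).FixValue(sym_context, u)
xdot = sym_plant.EvalTimeDerivatives(sym_context).CopyToVector()

# Impose f(x, u) = 0; add bounds or costs here to select a trim point.
prog.AddConstraint(eq(xdot, np.zeros(nx)))
result = Solve(prog)
x_star, u_star = result.GetSolution(x), result.GetSolution(u)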

Related

How to map the generalized coordinate vector of the multi-body plant to the generalized coordinate vector of the rigid body tree in Drake?

A humanoid robot can be described by a kinematic tree in a URDF file. I found that the order of the elements in the generalized coordinate vector of the rigid body tree differs from that of the multibody plant. There is a code snippet online showing a way to achieve the mapping:
# q_rbt = X * q_mbp
# rigid_body_tree is an instance of RigidBodyTree()
# multi_body_plant is an instance of MultibodyPlant()
import numpy as np
# The rows of each actuation matrix (below the 6 floating-base DOFs) encode
# that plant's coordinate ordering, so this product is the permutation matrix.
B_rbt = rigid_body_tree.B
B_mbp = multi_body_plant.MakeActuationMatrix()
X = np.dot(B_rbt[6:, :], B_mbp[6:, :].T)
However, RigidBodyTree has been deprecated in the new Drake. Hence, how can I achieve the mapping now? In addition, I am curious about why Drake does not use the same order for the generalized coordinate vectors.
You might like the workflow I used in this littledog example. I have a PR in flight to enable that workflow directly in Drake; you can track its progress with this issue.
In general the recommendation, and I would recommend this even if we hadn't implemented it a particular way in Drake, is: if you have something as complicated as a humanoid, don't try to work with the vectors based on indices. Find a way to access the elements via their names, or in Drake via their joint accessors. Working with the raw vector is very error prone.
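For instance, a small sketch of name-based access with MultibodyPlant in pydrake (plant, context, and the joint name "left_knee" are placeholders for your own model):
# Look joints up by name instead of relying on raw vector indices.
knee = plant.GetJointByName("left_knee")   # placeholder joint name
q_knee = knee.get_angle(context)           # RevoluteJoint accessor
i = knee.position_start()                  # index into q, if you must slice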

Workflow for "Through-Contact" Trajectory Optimization in Drake

I would like to ask what would be an appropriate workflow for solving a trajectory optimization problem that takes into account contact interactions between different bodies in an MBP in Drake, like the problems addressed in this paper. I've tried using the DirectCollocation class, but apparently it does not consider contact forces as optimization variables. Is there any way to incorporate them into the mathematical program generated by DirectCollocation? Thanks.
There are several things to consider when you plan through contact:
The decision variables are not only the state/control any more, but should include the contact forces.
The constraints are not only the dynamics constraint x[n+1] = f(x[n], u[n]), but x[n+1] = f(x[n], u[n], λ[n]), where λ is the contact force, together with the complementarity constraint 0 ≤ λ ⊥ ϕ(q) ≥ 0.
Instead of using direct collocation (which assumes the state trajectory is a piecewise cubic spline), Michael's paper uses backward Euler integration, which assumes piecewise linear interpolation.
So you would need the following features:
1. Add the contact forces as decision variables.
2. Add the complementarity constraints.
3. Use backward Euler integration instead of DirectCollocationConstraint.
Unfortunately Drake doesn't have an implementation of all these yet. The closest match inside Drake is in this folder, and the starting point could be the StaticEquilibriumProblem. That class provides feature 1 and part of feature 2 (it has the complementarity constraint, but a static equilibrium constraint instead of a dynamics constraint), but doesn't do feature 3 (the backward Euler step).
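To make features 1–3 concrete, here is a minimal hand-rolled sketch (my illustration, not an existing Drake class): a point mass dropped onto flat ground, transcribed with backward Euler, contact-force decision variables, and a complementarity constraint. The horizon, step size, and physical constants are arbitrary choices.
import numpy as np
from pydrake.solvers import MathematicalProgram, Solve

N, h, m, g = 20, 0.05, 1.0, 9.81
prog = MathematicalProgram()
q = prog.NewContinuousVariables(N + 1, "q")         # height above ground
v = prog.NewContinuousVariables(N + 1, "v")         # vertical velocity
lam = prog.NewContinuousVariables(N + 1, "lambda")  # contact force (feature 1)

prog.AddConstraint(q[0] == 1.0)  # initial height
prog.AddConstraint(v[0] == 0.0)  # released from rest
for n in range(N):
    # Feature 3: backward Euler, x[n+1] = x[n] + h * f(x[n+1], lambda[n+1]).
    prog.AddConstraint(q[n + 1] == q[n] + h * v[n + 1])
    prog.AddConstraint(m * v[n + 1] == m * v[n] + h * (lam[n + 1] - m * g))
    # Feature 2: complementarity 0 <= lambda perp phi(q) >= 0, with phi(q) = q.
    prog.AddConstraint(q[n + 1] >= 0)
    prog.AddConstraint(lam[n + 1] >= 0)
    prog.AddConstraint(lam[n + 1] * q[n + 1] == 0)

prog.SetInitialGuess(q, np.linspace(1.0, 0.0, N + 1))  # help the NLP solver
result = Solve(prog)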

Bayesian optimization in machine learning

Thanks for reading this. I am currently studying the Bayesian optimization problem and following a tutorial; please see the attachment: bayesian optimization tutorial.
My question is about the acquisition function on page 11. Before I raise it, I need to state my understanding of Bayesian optimization to see if there is anything wrong.
First we take some training points and model them as a multivariate Gaussian distribution. Then we use the acquisition function to find the next point we want to sample. So, for example, we use x1, ..., x_t as training points, then use the acquisition function to find x_{t+1} and sample it. Then we model x1, ..., x_t, x_{t+1} as a multivariate Gaussian distribution, use the acquisition function to find x_{t+2}, and so on.
On page 11, it seems we need to find the x that maximizes the probability of improvement. f(x+) comes from the sampled training points (x1, ..., x_t) and is easy to get. But how do I get μ(x) and the variance here? I don't know what the x in the equation is. It should be x_{t+1}, but the paper doesn't say so. And if it is indeed x_{t+1}, how can I get μ(x_{t+1})? You may say to use the equation at the bottom of page 8, but we can only use that equation once we have found x_{t+1} and put it into the multivariate Gaussian distribution. Since we don't yet know the next point x_{t+1}, I have no way to calculate it, in my opinion.
I know this is a tough question. Thanks for answering!!
In fact, I have found the answer.
Indeed it is x_{t+1}. The direct way is to compute μ and the variance for every candidate x outside of the training data, plug each into the acquisition function, and see which one gives the maximum.
This is time consuming, so instead of trying candidates one by one, we use a nonlinear optimizer like DIRECT to find the x that maximizes the acquisition function.
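A small sketch of that idea in Python (my own illustration, not from the tutorial): the GP posterior equations give μ(x) and σ(x) at any candidate x, so the probability of improvement Φ((μ(x) − f(x+))/σ(x)) can be evaluated over a grid of candidates and maximized.
import numpy as np
from scipy.stats import norm

def probability_of_improvement(mu, sigma, f_best):
    # mu, sigma: GP posterior mean/std at the candidate points; f_best = f(x+).
    return norm.cdf((mu - f_best) / sigma)

# Placeholder posterior over a grid of candidates (stand-ins for the
# page-8 posterior equations evaluated at each x):
xs = np.linspace(0.0, 1.0, 201)
mu = np.sin(3.0 * xs)          # placeholder posterior mean
sigma = 0.1 + 0.2 * xs         # placeholder posterior std
f_best = 0.8                   # best observed value so far
x_next = xs[np.argmax(probability_of_improvement(mu, sigma, f_best))]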

Explanation for Values in Scharr-Filter used in OpenCV (and other places)

The Scharr filter is explained in Scharr's dissertation. However, the values given on page 155 (167 in the PDF) are [47 162 47] / 256. Multiplying this smoothing kernel with the derivative filter [-1 0 1] / 2 would yield:
[ -47    0    47 ]
[ -162   0   162 ]  / 512
[ -47    0    47 ]
Yet all other references I found use
[ -3    0    3 ]
[ -10   0   10 ]
[ -3    0    3 ]
which is roughly the same as the one given by Scharr, scaled by a factor of 32.
Now my guess is that the range can be represented better, but I'm curious if there is an official explanation somewhere.
To get the ball rolling on this question in case no "expert" can be found...
I believe the values [3, 10, 3] instead of [47 162 47] / 256 are used simply for speed. Recall that this method is competing against the Sobel operator, whose coefficient values are 0 and positive/negative 1s and 2s.
Even though the divisor, 256 or 512, is a power of 2 and can be performed by a shift, doing that shift and multiplying by 47 or 162 is going to take more time. A multiplication by 3, however, can be done on some RISC architectures, like the IBM POWER series, in a single shift-and-add operation, that is, 3x = (x << 1) + x. (On these architectures, the shifter and adder are separate units, so the operations can be done independently.)
I don't find it surprising that the PhD thesis used the more complicated and probably more precise formula; it needed to prove or demonstrate something, and the author probably wasn't totally concerned with how it would be implemented alongside other methods. The purpose in the thesis was probably to achieve "perfect rotational symmetry". Afterwards, whoever decided to implement it, I suspect, used the approximation and gave up a little of that perfect rotational symmetry in order to be speed-competitive.
Since I'm guessing you are willing to do this work, as it is your thesis, my suggestion is to implement the original algorithm and benchmark it against both the OpenCV Scharr and Sobel code.
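A starting point for that benchmark might look like the following sketch (my own; the input filename is a placeholder, and the division by 32 reflects the scale factor noted in the question):
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # placeholder file

# Dissertation kernel: smoothing [47 162 47]/256 times derivative [-1 0 1]/2.
smooth = np.array([47.0, 162.0, 47.0]) / 256.0
deriv = np.array([-1.0, 0.0, 1.0]) / 2.0
kernel_x = np.outer(smooth, deriv)

orig = cv2.filter2D(img, -1, kernel_x)               # dissertation values
scharr = cv2.Scharr(img, cv2.CV_32F, 1, 0) / 32.0    # [3 10 3] kernel, rescaled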
The other thing to try to get an "official" answer is: "Use the 'source', Luke!" The code is on GitHub, so check it out, see who added the Scharr filter there, and contact that person. I won't put the person's name here, but I will say that the code was added 2010-05-11.

OpenCV Multilevel B-Spline Approximation

Hi (sorry for my English). I'm working on a university project in which I need to use the MBA (Multilevel B-Spline Approximation) algorithm to get some points (control points) of an image, to use in other operations.
I've been reading a lot of papers about this algorithm, and I think I understand it, but I can't manage to write it.
The idea is: read an image, process the image (OpenCV), then get the control points of the image and use those points.
So the problem here is:
The algorithm uses a set of points {(x, y, z)}, which is approximated by a surface generated from the control points obtained by MBA. The set of points {(x, y, z)} represents the data we need to approximate (the image).
So, the image is in cv::Mat format; how can I transform this format into an ordinary array, to simply access and manipulate the data?
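In C++ a cv::Mat stores its pixels in one contiguous buffer (when mat.isContinuous() holds), so you can walk mat.data or use mat.at<uchar>(row, col). Here is the same idea as a Python sketch (my illustration; the filename is a placeholder), building the {(x, y, z)} scatter set with z taken as pixel intensity:
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename
rows, cols = img.shape

# Build the {(x, y, z)} scatter set the MBA papers describe, with
# z as the pixel intensity at (x, y).
ys, xs = np.mgrid[0:rows, 0:cols]
points = np.column_stack([xs.ravel(), ys.ravel(), img.ravel()]).astype(np.float64)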
Here are some references with explanations of the method:
(Paper) Regularized Multilevel B-Spline Registration
(Paper) Scattered Data Interpolation with Multilevel B-splines
(MATLAB) MBA
If someone can help, a guideline, an idea, or anything would be appreciated.
Thanks in advance.
EDIT: I finally wrote the algorithm in C++ using Armadillo and OpenCV.
I'm using Armadillo, a C++ linear algebra library, to work with the matrices in the algorithm.
