I am using SOS optimization to solve an adaptive control problem using the inverse Lyapunov method. I have been successful in obtaining the Lyapunov function and region-of-attraction level set for some simple problems. Now I am trying to determine the Lyapunov function for a new system. I am getting the error "Constraint ### is empty.", where ### is a number that changes from run to run. How do I debug which constraint is empty? My constraints look like the following:
prog.AddSosConstraint( V-l1 )
prog.AddSosConstraint( -((beta-h)*p1 + V-1) )
prog.AddSosConstraint( -(l2+Vdot) + p2*(V-1))
p1 and p2 contain the decision variables. V, l1, and l2 are functions of the indeterminates only.
I am following the iterative procedure in [1] to solve for the Lyapunov function and region of attraction level-set.
[1] F. Meng, D. Wang, P. Yang, G. Xie and F. Guo, "Application of Sum-of-Squares Method in Estimation of Region of Attraction for Nonlinear Polynomial Systems," in IEEE Access, vol. 8, pp. 14234-14243, 2020, doi: 10.1109/ACCESS.2020.2966566.
The problem here seems to be that when Drake passes the program along to the CSDP library to solve, CSDP rejects the program as malformed. Ideally, Drake would detect this and report it back to you, rather than handing it over to CSDP to fail.
It's possible that this is a similar bug to #16732.
It's possible that the debug_mathematical_program tutorial would offer some tips for debugging. In particular, the "print a summary" might let you see anything suspicious (likely), or "naming your constraints" might also help (though probably not).
In any case, if you are able to provide sample code that reproduces the error, then we will be able to offer better advice.
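In the meantime, a Drake-independent sketch of what "empty" usually means may help: a constraint whose polynomial has no nonzero terms left, for example because substituting your current iterate cancelled everything. Representing each constraint's polynomial as a {monomial: coefficient} dict (the names and numbers below are made up, not Drake data structures), you can flag the degenerate ones yourself:

```python
def find_empty_constraints(constraints, tol=1e-12):
    """Return names of constraints whose polynomials are (numerically) zero.

    `constraints` maps a constraint name to a {monomial: coefficient} dict.
    A dict with no entries, or with all coefficients below `tol`, is "empty".
    """
    return [
        name
        for name, poly in constraints.items()
        if all(abs(c) < tol for c in poly.values())  # vacuously True if no terms
    ]

# Made-up coefficient data mirroring the three constraints in the question.
constraints = {
    "V - l1": {"x**2": 1.0, "y**2": 0.5},            # healthy constraint
    "-((beta-h)*p1 + V-1)": {"x**2": 0.0, "x*y": 0.0},  # everything cancelled
    "-(l2+Vdot) + p2*(V-1)": {},                      # no terms at all
}

print(find_empty_constraints(constraints))  # flags the two degenerate ones
```

A check like this, run on your own polynomial data before calling Solve(), can tell you which constraint degenerated during the iteration.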
I'm trying to use Drake's inverse dynamics controller on an arm with a floating base, and based on this discussion it seems like the most straightforward approach is to use two separate plants, since the controller only supports fully actuated systems.
Following Python bindings error when adding two plants to a scene graph in pyDrake, I attempted to create two plants using the following code:
def register_plant_with_scene_graph(scene_graph, plant):
    plant.RegsterAsSourceForSceneGraph(scene_graph)
    builder.Connect(
        plant.get_geometry_poses_output_port(),
        scene_graph.get_source_pose_port(plant.get_source_id()),
    )
    builder.Connect(
        scene_graph.get_query_output_port(),
        plant.get_geometry_query_input_port(),
    )
builder = DiagramBuilder()
scene_graph = builder.AddSystem(SceneGraph())
plant_1 = builder.AddSystem(MultibodyPlant(time_step=0.0))
register_plant_with_scene_graph(scene_graph, plant_1)
plant_2 = builder.AddSystem(MultibodyPlant(time_step=0.0))
register_plant_with_scene_graph(scene_graph, plant_2)
which produced the error
AttributeError: 'MultibodyPlant_[float]' object has no attribute 'RegsterAsSourceForSceneGraph'
This seems odd, because according to the documentation the function should exist.
Is this function available in the Python bindings for Drake? Also, more broadly, is this the correct way to approach using the inverse dynamics controller on a free-floating manipulator?
Inverse dynamics takes desired positions, velocities, and accelerations and computes the required torques. If your robot has a floating base, then you cannot accept arbitrary acceleration commands. For instance, the total center of mass of your robot will be falling according to gravity; any acceleration command that does not satisfy this constraint will have no feasible inverse dynamics solution. I think there must be something more that we need to understand about your problem formulation.
Often when people ask this question, they are thinking of a robot that is relying on contact forces in addition to generalized force/torques in order to achieve the requested accelerations. In that case, the problem needs to include those contact forces as decision variables, too. Since contact forces have unilateral constraints (e.g. feet cannot pull on the ground), and friction cone constraints, this inverse dynamics problem is almost always formulated as a quadratic program. For instance, as in this paper. We don't currently provide that QP formulation in Drake, but it is not hard to write it against the MathematicalProgram interface. And we do have some older code that was removed from Drake (since it wasn't actively developed) that we can point you to if it helps.
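To make that concrete, here is a minimal, Drake-independent sketch (plain numpy; all names and numbers are illustrative) of the smallest possible version of such a problem: a 1-D point mass that can apply an actuator force u and receive a contact normal force f >= 0, with dynamics m*a = u + f - m*g. Given a desired acceleration, we treat both u and f as decision variables and solve the resulting small QP; here only the equality-constrained part is active, so a KKT solve suffices:

```python
import numpy as np

# Toy inverse-dynamics QP with a contact force as a decision variable:
#     min  u^2 + f^2
#     s.t. u + f = m*(a_des + g)   (dynamics: m*a_des = u + f - m*g)
#          f >= 0                  (ground cannot pull)
m, g = 2.0, 9.81
a_des = 1.0  # desired upward acceleration

# KKT system for min z'z s.t. A z = b, with z = [u, f]:
#   [2I  A'] [z     ]   [0]
#   [A   0 ] [lambda] = [b]
A = np.array([[1.0, 1.0]])
b = np.array([m * (a_des + g)])
K = np.block([[2 * np.eye(2), A.T], [A, np.zeros((1, 1))]])
u, f = np.linalg.solve(K, np.concatenate([np.zeros(2), b]))[:2]

# Here the equality-constrained minimizer already satisfies f >= 0; in
# general the unilateral and friction-cone constraints are exactly what
# makes whole-body inverse dynamics a proper QP rather than a linear solve.
print(u, f)
```

In a real formulation (e.g. via MathematicalProgram) the decision variables would be joint accelerations, generalized forces, and the stacked contact wrenches, with the floating-base dynamics as an equality constraint and the friction cones as conic constraints.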
I am fitting a Spatial Error Model using the errorsarlm() function in the spdep library.
The Breusch-Pagan test for spatial models, computed using the bptest.sarlm() function, suggests the presence of heteroskedasticity.
A natural next step would be to obtain robust standard error estimates and update the p-values. The documentation of the bptest.sarlm() function says the following:
"It is also technically possible to make heteroskedasticity corrections to standard error estimates by using the “lm.target” component of sarlm objects - using functions in the lmtest and sandwich packages."
and the following code (as reference) is presented:
lm.target <- lm(error.col$tary ~ error.col$tarX - 1)
if (require(lmtest) && require(sandwich)) {
print(coeftest(lm.target, vcov=vcovHC(lm.target, type="HC0"), df=Inf))}
where error.col is the spatial error model estimated.
Now, I can easily adapt the code to my problem and get the robust standard errors.
Nevertheless, I was wondering:
What exactly is the “lm.target” component of sarlm objects? I cannot find any mention of it in the spdep documentation.
What exactly are $tary and $tarX? Again, they do not seem to be mentioned in the documentation.
Why does the documentation say it is "technically possible to make heteroskedasticity corrections"? Does it mean that the proposed approach is not really recommended for dealing with heteroskedasticity?
I reported this issue on GitHub and received a response from Roger Bivand:
No, the approach is not recommended at all. Either use sphet or a Bayesian approach giving the marginal posterior distribution. I'll drop the confusing documentation. tary is $y - \rho W y$ and similarly for tarX in the spatial error model case. Note that tary etc. only occur in spdep in documentation for localmoran.exact() and localmoran.sad(); were you using out of date package versions?
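To make those definitions concrete, here is a small sketch (in Python/numpy rather than R, with made-up numbers) of the spatially filtered "target" variables Bivand describes: tary = y - rho*W*y and tarX = X - rho*W*X, where W is the spatial weights matrix and rho the estimated error-autocorrelation parameter. In practice errorsarlm() estimates rho for you, and the auxiliary regression lm(tary ~ tarX - 1) is then ordinary least squares on these filtered variables:

```python
import numpy as np

rho = 0.4  # illustrative value; errorsarlm() would estimate this
W = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])   # row-standardised spatial weights
y = np.array([1.0, 2.0, 3.0])
X = np.array([[1.0, 0.5],
              [1.0, 1.5],
              [1.0, 2.5]])        # intercept column plus one regressor

# Spatial filtering removes the estimated error autocorrelation.
tary = y - rho * (W @ y)
tarX = X - rho * (W @ X)
print(tary)
```

The point of Bivand's warning is that applying sandwich-type corrections to this auxiliary OLS fit ignores the uncertainty in rho, which is why the approach is discouraged in favour of sphet or a Bayesian treatment.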
I am trying to solve a MathematicalProgram that includes an SOS polynomial. I am running Drake in C++, compiled from source, with Mosek.
The MathematicalProgram contains a quadratic cost function and some equality constraints, which works fine when calling Solve() without adding the SOS polynomials. When looking at result.get_solver_id(), I find: "Equality constrained QP", as expected.
However, upon calling Solve() after adding an SOS polynomial through prog.NewSosPolynomial({t}, degree) (with t being a decision variable), the program reports that a solution could not be found. Looking at the value in result.get_solution_result(), I find solution_status = false and rescode = 1501.
Looking here, rescode = 1501 means: "The problem contains nonlinear terms conic constraints. The requested operation cannot be applied to this type of problem.". However, by checking the value of result.get_solver_id() before adding the SOS Polynomial, it is clear that there are no other nonlinear constraints in the problem.
Am I missing something here, or is this a bug?
Interesting. The standard form for a semi-definite program (which results from our SOS constraint) only accepts linear objectives, not quadratic objectives. This does not result in any loss of generality, because you can use a slack variable. Can you try the following:
Right now you have something like
min x'Qx
s.t. Ax=b, p(t) is SOS.
Can you write it instead as
min a
s.t. Ax=b, p(t) is SOS, x'Qx <= a
but add the x'Qx <= a using AddLorentzConeConstraint? (Note: it looks like you might actually need x'Qx <= a^2 and a >= 0.)
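To see why this loses no generality, here is a small numpy check (arbitrary data, not Drake code): for a positive definite Q factored as Q = L'L, we have x'Qx = ||Lx||², so "x'Qx <= a² and a >= 0" is exactly the Lorentz (second-order) cone constraint ||Lx|| <= a, and the quadratic objective min x'Qx can be swapped for the linear objective min a:

```python
import numpy as np

rng = np.random.default_rng(0)
Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])        # positive definite
L = np.linalg.cholesky(Q).T       # upper-triangular factor with Q = L.T @ L
x = rng.standard_normal(2)

quad = x @ Q @ x                  # the quadratic cost x'Qx
norm = np.linalg.norm(L @ x)      # ||L x||, the cone-constraint quantity
print(quad, norm ** 2)            # the two agree to rounding error
```

Minimizing the slack a subject to ||Lx|| <= a then recovers the minimum of x'Qx, since at the optimum the cone constraint is tight.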
I'm new to ML and Kaggle. I was going through the solution of a Kaggle Challenge.
Challenge: https://www.kaggle.com/c/trackml-particle-identification
Solution: https://www.kaggle.com/outrunner/trackml-2-solution-example
While going through the code, I noticed that the author used only the train_1 file (not train_2, 3, …).
I suspect there is some strategy behind using only the train_1 file. Can someone please explain why that is? Also, what is the purpose of the blacklist_training.zip, train_sample.zip, and detectors.zip files?
I'm one of the organisers of the challenge. The train_1, 2, 3, … files are all equivalent. Outrunner probably saw that there was no improvement from using more data.
train_sample.zip is a small dataset, equivalent to the train_1, 2, 3, … files, provided for convenience.
blacklist_training.zip is a list of particles to be ignored due to a small bug in the simulator (not very important).
detectors.zip is a list of the geometric surfaces where the x, y, z measurements are made.
David
I saw in a previous post from last August that Z3 did not support optimization.
However, it also stated that the developers were planning to add such support.
I could not find anything in the source to suggest this has happened.
Can anyone tell me whether my assumption that there is no support is correct, or was it added and I somehow missed it?
Thanks,
Omer
If your optimization has an integer valued objective function, one approach that works reasonably well is to run a binary search for the optimal value. Suppose you're solving the set of constraints C(x,y,z), maximizing the objective function f(x,y,z).
1. Find an arbitrary solution (x0, y0, z0) to C(x,y,z).
2. Compute f0 = f(x0, y0, z0). This will be your first lower bound.
3. As long as you don't know an upper bound on the objective value, try to solve the constraints C(x,y,z) ∧ f(x,y,z) > 2 * L, where L is your best lower bound (initially f0, then whatever better value you found). (This doubling only grows the bound when L is positive; otherwise grow it some other way, e.g. additively.)
4. Once you have both an upper and a lower bound, apply binary search: solve C(x,y,z) ∧ 2 * f(x,y,z) > (U + L). If the formula is satisfiable, compute a new lower bound from the model; if it is unsatisfiable, (U + L) / 2 is a new upper bound.
Step 3 will not terminate if your problem does not admit a maximum, so you may want to bound the search if you are not sure it does.
You should of course use push and pop to solve the succession of problems incrementally. You'll additionally need the ability to extract models for intermediate steps and to evaluate f on them.
We have used this approach in our work on Kaplan with reasonable success.
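The procedure can be sketched as follows, with a brute-force feasibility check standing in for Z3 (in practice each call would be an incremental solver.check() under push/pop, reading f off the model; the constraint set C = {x² + y² <= 25} and objective f = x + 2y are toy examples):

```python
def solve(lower):
    """Stub for 'solve C(x,y) AND f(x,y) > lower'; returns a model or None."""
    for x in range(-10, 11):
        for y in range(-10, 11):
            if x * x + y * y <= 25 and x + 2 * y > lower:
                return (x, y)
    return None

def f(model):
    x, y = model
    return x + 2 * y

# Steps 1-2: any solution gives a first lower bound.
L = f(solve(float("-inf")))

# Step 3: grow the bound until an UNSAT answer yields an upper bound
# (doubling only grows the bound when L > 0, hence the guard).
U = None
while U is None:
    probe = 2 * L if L > 0 else 1
    m = solve(probe)
    if m is None:
        U = probe  # no model with f > probe, so probe bounds f from above
    else:
        L = f(m)

# Step 4: binary search. Invariant: some model achieves f = L, and f <= U.
while L < U:
    mid = (L + U) // 2
    m = solve(mid)  # is f > mid achievable?
    if m is None:
        U = mid
    else:
        L = f(m)

print(L)  # → 11, the maximum of f subject to C
```

Each satisfiable query strictly raises L past the midpoint and each unsatisfiable one lowers U, so the loop terminates with L = U = the maximum for any integer-valued objective that admits one.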
Z3 currently does not support optimization. This is on the TODO list, but it has not been implemented yet. The following slide decks describe the approach that will be used in Z3:
Exact nonlinear optimization on demand
Computation in Real Closed Infinitesimal and Transcendental Extensions of the Rationals
The library for computing with infinitesimals has already been implemented, and is available in the unstable (work-in-progress) branch, and online at rise4fun.