Troubleshooting optimization for finding a robot pose in equilibrium - drake

I am trying to write a program that, given some cost and constraints, finds a combination of states and torques that will result in a fixed point. The script can be found in this Colab.
For a bit more background: I am trying to balance a humanoid lower body with contact (one contact point at each foot). I use Drake's position constraint to get a pose where the feet are on the ground. The contact forces should lie within the friction cone, and the robot should be in equilibrium. My cost function consists of the squared torques and the position of the center of mass.
There are a few things going wrong with my optimization:
1. The SNOPT output tells me that one or more constraints (I can only see the constraint with the largest discrepancy) seem to have incorrect gradients. SNOPT also provides the constraint number. Is there a way to see which constraint in Drake corresponds to the constraint number given by SNOPT?
2. Possibly related to 1, SNOPT mostly exits with two different error codes: SNOPT 42: Singular basis and SNOPT 43: Cannot satisfy the general constraints. Error code 43 is quite clear, but I don't know how to approach finding a solution to 42.
3. Lastly, I was wondering what kind of delta is usually acceptable for such an optimization. Currently, most of the failing constraints have a delta below 1e-2, which is still much too large. But maybe my constraints are a bit too strict for the default solver settings?

Is there a way to see which constraint in Drake corresponds to the constraint number given by SNOPT?
Unfortunately, we don't currently provide an API to map the constraint number reported by SNOPT back to the corresponding Drake constraint. On the other hand, you could compute the gradient of your Drake constraint numerically and compare it with the gradient computed in your Eval function. Drake provides a ComputeNumericalGradient function for this.
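The same check can be done with a few lines of plain NumPy, independent of Drake's own ComputeNumericalGradient. This is a minimal sketch; the example constraint and its hand-written gradient are made up for illustration:

```python
import numpy as np

def numerical_gradient(f, x, eps=1e-7):
    """Central-difference approximation of the Jacobian of f at x."""
    x = np.asarray(x, dtype=float)
    f0 = np.atleast_1d(f(x))
    jac = np.zeros((f0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        jac[:, i] = (np.atleast_1d(f(x + dx))
                     - np.atleast_1d(f(x - dx))) / (2 * eps)
    return jac

# Toy "constraint" with a hand-coded analytic Jacobian to check against.
f = lambda x: np.array([x[0] * x[1], x[0] ** 2])
analytic = lambda x: np.array([[x[1], x[0]], [2 * x[0], 0.0]])

x0 = np.array([1.5, -2.0])
assert np.allclose(numerical_gradient(f, x0), analytic(x0), atol=1e-5)
```

If the two Jacobians disagree in some entry, that entry points at the variable and constraint row whose gradient in your Eval function is wrong.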
Possibly related to 1, SNOPT mostly exits with 2 different error codes: SNOPT 42: Singular basis and SNOPT 43: Cannot satisfy the general constraints.
Most likely this is caused by the gradient error.

Related

Drake with GurobiSolver for mixed integer programming problem

I'm trying to use Drake to solve a mixed-integer programming problem. One challenge is that my dynamics are nonlinear, involving a rotation matrix. I tried the Gurobi solver on this problem, but it reports an error like "GurobiSolver is unable to solve because a GenericConstraint was declared but is not supported." May I ask how to deal with this kind of problem in Drake with GurobiSolver?
By the way, I know one way is as this link pointed out, but using SNOPT with hard constraints doesn't produce good results. I think GurobiSolver might be better suited to this kind of MIQP problem.
When you use a rotation matrix in your optimization problem, you will need the SO(3) constraint
RᵀR=I
Do you mean you want to impose this as a quadratic equality constraint? Theoretically Gurobi can handle such non-convex quadratic constraints (though this is currently not supported through Drake's interface). In practice we find that if you have several non-convex quadratic constraints, Gurobi becomes quite slow.
If the only non-convex quadratic constraints in your problem are the rotation-matrix SO(3) constraints, then Drake has implemented customized mixed-integer linear/second-order-cone constraints that approximately satisfy the SO(3) constraints. The code is in https://github.com/RobotLocomotion/drake/blob/eb4df7d3cde2db48c697943c43395a3f2b74e00c/solvers/mixed_integer_rotation_constraint.h#L50. One example is in https://github.com/RobotLocomotion/drake/blob/eb4df7d3cde2db48c697943c43395a3f2b74e00c/multibody/inverse_kinematics/global_inverse_kinematics.cc#L113. The math is described in our paper.
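To make the constraint concrete: RᵀR = I expands to six independent quadratic equality constraints on the nine matrix entries (the upper triangle of the symmetric product RᵀR). A small NumPy check, with hypothetical helper names, verifying that a genuine rotation satisfies all six:

```python
import numpy as np

def rotation_from_axis_angle(axis, angle):
    """Rodrigues' formula: rotation matrix for a given axis and angle."""
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def so3_residuals(R):
    """The six independent quadratic equality constraints R^T R = I
    (upper triangle of the symmetric 3x3 product)."""
    E = R.T @ R - np.eye(3)
    return np.array([E[i, j] for i in range(3) for j in range(i, 3)])

R = rotation_from_axis_angle([1.0, 2.0, 3.0], 0.7)
assert np.allclose(so3_residuals(R), 0.0, atol=1e-12)
```

Each residual is quadratic (not linear) in the entries of R, which is exactly why the feasible set is non-convex and a generic QP/MIQP solver struggles with it.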

Acceleration/Jerk level constraint for trajectory optimization using Direct collocation

I would like to know if there is an elegant way to impose acceleration/jerk-level constraints on a trajectory optimization using the DirectCollocation class.
I am working with an Acrobot system and have already included the velocity-level constraint, but I would like a minimum/smooth-jerk optimal trajectory.
Thank you in advance.
Yes. The standard direct collocation method (which is implemented in the DirectCollocation class) uses cubic splines to represent the position trajectory. If you take the second derivative, you'll be left with a first-order spline... so the acceleration is always piecewise-linear, and the jerk is always piecewise constant.
So the constraints you would add would be simple constraints on the spline coefficients of the state trajectory. We don't offer those constraints directly in the API (but could). You could implement them (are you in Python or C++?) following the pattern here.
It might also help to look at the corresponding section of the course notes if you haven't.
One subtlety -- the current implementation actually represents the state trajectory as the cubic spline (it redundantly represents the positions and velocities). You could opt to add your constraints to either the position trajectory or the velocity trajectory. The constraints should be satisfied perfectly at the knots/collocation points, but the trajectories will be subtly different due to the interpolation.
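The piecewise structure described above is easy to check with SciPy (the knot values here are arbitrary placeholders, not Drake output): the second derivative of a cubic spline is piecewise linear, so bounding the acceleration at the breakpoints bounds it everywhere, and the third derivative is piecewise constant.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Fit a cubic spline through some sample "position" knots.
t = np.linspace(0.0, 2.0, 5)
q = np.sin(t)
pos = CubicSpline(t, q)

acc = pos.derivative(2)   # piecewise linear
jerk = pos.derivative(3)  # piecewise constant

# Acceleration is linear within each segment: its midpoint value is the
# average of the endpoint values, so constraints at the breaks suffice.
mid = 0.5 * (t[0] + t[1])
assert np.isclose(acc(mid), 0.5 * (acc(t[0]) + acc(t[1])))

# Jerk is constant within each segment.
assert np.isclose(jerk(0.1), jerk(0.4))
```

In a direct-collocation program you would impose the analogous linear constraints on the spline coefficients of the decision variables rather than on a fitted spline, but the structure being exploited is the same.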

Particle Swarm Optimisation: Converges to local optima too quickly in high dimension space

In a portfolio optimisation problem, I have a high-dimensional (n = 500) space with upper and lower bounds of [0, 5,000,000]. With PSO I am finding that the solution converges quickly to a local optimum, and I have narrowed the problem down to a few areas:
Velocity: Particle velocity rapidly decays to extremely small step sizes [0-10] in the context of the upper/lower bounds [0, 5,000,000]. One stopgap I have found is that I could change the velocity update function to a binary step size [e.g. 250,000] using a sigmoid function, but this is clearly only a patch. Any recommendations on how to keep the velocity high?
Initial feasible solutions: When initialising 1,000 particles, I might find that only 5% are feasible with respect to my constraints. I thought I could improve coverage of the search space by re-running the initialisation until all particles start in the feasible region, but it turns out that this actually results in worse performance: all the particles just stay stuck close to their initialisation vectors.
With respect to my parameters, w = c1 = c2 = 0.5. Is this likely to be the source of both problems?
I am open to any advice, as in theory this should be a good approach to portfolio optimisation, but in practice I am not seeing it.
Consider changing the parameters. Using w = 0.5 'stabilizes' the particles and thus prevents escape from local optima, because the swarm converges too early. Furthermore, I would suggest making c1 and c2 larger than 1 (I think 2 is the suggested value), and perhaps making c1 (the tendency to move toward the global best) slightly smaller than c2 to prevent overcrowding on one solution.
Anyway, have you tried running the PSO with a larger number of particles? People usually use 100-200 particles to solve 2-10 dimensional problems. I don't think 1,000 particles in a 500-dimensional space will cut it. I would also suggest using a more advanced initialization method instead of a normal or uniform distribution (e.g. a chaotic map, Sobol sequence, or Latin hypercube sampling).
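For reference, here is a minimal PSO sketch showing exactly where w, c1 and c2 enter the velocity update. The objective, bounds and parameter values are illustrative only (w ≈ 0.73 and c1 = c2 ≈ 1.5 are common textbook choices, in the spirit of the advice above):

```python
import numpy as np

def pso(f, bounds, n_particles=60, iters=200, w=0.73, c1=1.49, c2=1.49,
        seed=0):
    """Minimal particle swarm: returns the best position found for f."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Inertia w keeps exploration alive; c1 pulls toward each
        # particle's personal best, c2 toward the swarm's global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g

# Sphere function in 5-D: optimum at the origin.
lo, hi = -5.0 * np.ones(5), 5.0 * np.ones(5)
best = pso(lambda z: float(z @ z), (lo, hi))
```

With w = 0.5 in this loop the velocity term dies out within a few dozen iterations, which reproduces the step-size collapse described in the question.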

Homography and projective transformation

I'm trying to write code that performs a projective transformation, but with more than 4 key points. I found this helpful guide, but it uses 4 reference points:
https://math.stackexchange.com/questions/296794/finding-the-transform-matrix-from-4-projected-points-with-javascript
I know that MATLAB has a function, cp2tform, that handles this, but I haven't found a way to do it myself so far.
Can anyone give me some guidance on how to do so? I can solve the equations using least squares, but I'm stuck since I have a matrix larger than 3×3 and I can't multiply the homogeneous coordinates.
Thanks
If you have more than four control points, you have an overdetermined system of equations. There are two possible scenarios. Either your points are all compatible with the same transformation. In that case, any four points can be used, and the rest will match the transformation exactly. At least in theory. For the sake of numeric stability you'd probably want to choose your points so that they are far from being collinear.
Or your points are not all compatible with a single projective transformation. In this case, all you can hope for is an approximation. If you want the best approximation, you'll have to be more specific about what “best” means, i.e. some kind of error measure. Measuring things in a projective setup is inherently tricky, since there are usually a lot of arbitrary decisions involved.
What you can try is fixing one matrix entry (e.g. the lower-right one to 1), then writing the conditions for the remaining 8 coordinates as a system of linear equations and performing a least-squares approximation. But the choice of matrix representative (i.e. fixing one entry here) affects the least-squares error measure while having no effect on the geometric meaning, so this is a fairly arbitrary choice. If the lower-right entry of the desired matrix happens to be zero, your computation will run into numeric problems due to overflow.
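The fix-one-entry approach described above can be sketched in a few lines of NumPy (the function name and test homography are illustrative):

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares homography with the lower-right entry fixed to 1.

    src, dst: (n, 2) arrays of corresponding points, n >= 4.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # From u = (h0*x + h1*y + h2) / (h6*x + h7*y + 1), and the
        # analogous equation for v, each point gives two linear rows.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

# Recover a known homography from 6 noise-free correspondences.
H = np.array([[1.2, 0.1, 3.0], [0.05, 0.9, -2.0], [1e-3, 2e-3, 1.0]])
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 1], [1, 2]], float)
p = (H @ np.column_stack([src, np.ones(6)]).T).T
dst = p[:, :2] / p[:, 2:]
assert np.allclose(fit_homography(src, dst), H, atol=1e-6)
```

With noisy points the same call returns the least-squares fit instead of an exact solution, with the caveat about the arbitrariness of this error measure discussed above.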

How to implement semi-randomized level in iPhone game?

I want to create a game with a level structure similar to iCopter or Canabalt, where each level has a randomized floor (and roof), but the height of the floor is never impossible to reach from the previous one. I am also unsure on how to continually increase difficulty. I have searched far and wide for a tutorial or something like that, but I couldn't find anything. Can anyone help?
It sounds like far too specific a need to be the subject of a tutorial, to be honest. I've played Canabalt but not iCopter so I'll talk about a game like the former.
There are all sorts of calculus equations you could use to calculate acceleration and gravity to work out precisely where a platform would have to be in order to be reachable, but I suspect you will do just as well with a simpler approximation. If all your platforms are of a minimum length, then you can make an assumption on the speed that it's reasonable to expect a player to be able to reach by the time they get to the end. That, in combination with however long your jump algorithm keeps someone in the air, dictates the maximum distance that another platform of the same height could possibly be and still be reachable.
The highest platform you can reach is usually dictated by your jump algorithm - that could be a constant height, or it could be proportional to the speed, but either way you can easily estimate the highest reasonable jump you can make at the end of any given platform. This gives you a maximum relative height that you can reach from there.
Assuming your physics are fairly realistic and you apply a constant downwards force while the player is in the air, the apex of the jump will be at around the half-way point. So a platform that is the maximum attainable height relative to the player needs to be half as distant as one on the same level would be. And to find reasonable relative height and distance combinations in between, you can linearly interpolate.
Platforms below you are obviously more lenient - they can be further away than one on the same level, again by a distance roughly proportional to the speed you're travelling.
A simple algorithm then would be to pick, at each stage, either a higher or lower platform, pick a relative height from within the attainable bounds, then find the distance it needs to be at.
To adjust difficulty, you can start with these relative height and distance values above, which represent the extremes of what is possible, and reduce them by a proportion to make the jumps easier to complete. I might start with 50% reductions, +/-10% (randomised) to provide a few tougher jumps. Then as the game progresses, I'd slowly ramp that 50% down towards 0, so the player has less and less margin for error.
EDIT: Since I posted this answer, I found another interesting source which may be of use: A Probabilistic Multi-Pass Level Generator. Although the game in question is different (one of the Mario games I don't recognise) many of the principles are similar, in terms of placing platforms at reasonable heights and distances. Java code is provided.
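The recipe above can be sketched as a single generator function. The parameters (speed, airtime, jump height) and the 50%-margin ramp are the hypothetical tuning knobs described in this answer, not values from any particular game:

```python
import random

def next_platform_offset(speed, airtime, jump_height, difficulty,
                         rng=random):
    """Pick a reachable (dx, dy) offset for the next platform.

    difficulty in [0, 1]: 0 keeps a 50% distance margin, 1 demands a
    near-perfect jump (the 50% -> 0% ramp described above).
    """
    max_dx_level = speed * airtime               # farthest same-height gap
    dy = rng.uniform(-jump_height, jump_height)  # relative platform height
    # At the apex height only half the same-level distance is reachable;
    # platforms below allow proportionally more. Linear interpolation:
    max_dx = max_dx_level * (1.0 - 0.5 * dy / jump_height)
    # Shrink toward the reachable limit as difficulty ramps from 0 to 1.
    dx = max_dx * (0.5 + 0.5 * difficulty)
    return dx, dy
```

At difficulty 0 every generated gap uses only half the reachable distance; ramping difficulty toward 1 pushes gaps toward the physical limit, shrinking the player's margin for error exactly as described.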
I'm not sure how comfortable you are with math, physics, etc. but, in my opinion, this is a pretty simple solution:
Using a formula to determine if a launched ball will clear a fence in the distance is a reasonable way to find an arc defining the possible farthest points the next platform could be. It's a standard formula you learn in physics when studying projectile motion. There's a fairly interactive example here that includes the equation.
I'd recommend determining the position for your next platform like this:
Randomly choose a horizontal distance X from the end of the current platform to the beginning of the next platform (determine a reasonable range for X however you want).
Use the fence problem to find a maximum value for height Y to make the platform reachable.
You may need to subtract a small amount from the maximum height to ensure the platform can be reached depending on how you have things implemented.
Choose a height Y that is no higher than the maximum (remember that you can and should allow negative values for Y).
Place the next platform past the current one at distance X and height Y.
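The "fence problem" bound from step 2 reduces to one line of projectile math. A small sketch with hypothetical jump parameters:

```python
def max_clear_height(vx, vy, g, x):
    """Height of the jump arc after horizontal distance x: an upper
    bound on how much higher the next platform's edge may start."""
    t = x / vx                      # time needed to cover the gap
    return vy * t - 0.5 * g * t * t

# Example: 8 m/s run speed, 6 m/s jump speed, g = 9.81 m/s^2.
# A platform 4 m away can start at most ~1.77 m above the takeoff point
# (subtract a small safety margin in practice, as noted in step 3).
h = max_clear_height(vx=8.0, vy=6.0, g=9.81, x=4.0)
```

Note the bound can go negative for large x, which simply means a platform that far away must sit below the takeoff point to be reachable.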
