I've been looking into some papers on inverse kinematics planning with collision avoidance for a robotic arm. I found a paper called "Global Inverse Kinematics via Mixed-Integer Convex Optimization", which I believe has a few of Drake's developers listed as authors.
The paper says that the code is already part of Drake. However, when I found it, the code had been placed in the attic folder. (https://github.com/RobotLocomotion/drake/blob/master/attic/multibody/global_inverse_kinematics.h)
In the current kuka arm example under move_iiwa_ee, the IK code called ConstraintRelaxingIk is used instead.
I would like to know why the "Global Inverse Kinematics via Mixed-Integer Convex Optimization" IK code was set aside, and whether there were any problems with it.
Also, I'm not sure if the current ConstraintRelaxingIk does collision avoidance. Could someone clarify this?
(And if it does, are there any extra steps to take other than inserting an obstacle model?)
Thank you
Thanks for your interest in our work.
The global inverse kinematics code is in the attic folder because it uses RigidBodyTree, which will be deprecated soon. I haven't had time to rewrite the code to use the new MultibodyTree yet. For the moment, if you would like to try the approach in the global IK paper, could you use the code in the attic folder? I will try to move the code out of the attic folder and onto MultibodyTree. (Update: the code has already been moved out of the attic folder; it is now in the inverse_kinematics folder.)
ConstraintRelaxingIk doesn't handle collision avoidance. Instead, you can use the InverseKinematics class, which formulates IK as a nonlinear optimization problem and calls a nonlinear solver to find the solution. Specifically for collision avoidance, you could use AddMinimumDistanceConstraint to impose a minimum-distance constraint between all pairs of geometries. Alternatively, you could call AddDistanceConstraint if you want to impose a distance constraint between a specific pair of geometries. I would actually recommend trying the InverseKinematics class before the global IK approach, since InverseKinematics uses MultibodyPlant and usually runs faster than the global approach.
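For reference, here is a minimal sketch of that workflow in pydrake. The model file, frame name, and target position are made-up placeholders, and the exact module paths (e.g., where Solve lives, or Parser.AddModels vs. AddModelFromFile) vary a bit between Drake releases:

import numpy as np
from pydrake.systems.framework import DiagramBuilder
from pydrake.multibody.plant import AddMultibodyPlantSceneGraph
from pydrake.multibody.parsing import Parser
from pydrake.multibody.inverse_kinematics import InverseKinematics
from pydrake.solvers import Solve  # older releases: pydrake.solvers.mathematicalprogram

# Build a plant together with a SceneGraph so collision queries are available.
builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=0.0)
Parser(plant).AddModels("arm_with_obstacles.sdf")  # hypothetical model file
plant.Finalize()
diagram = builder.Build()
context = diagram.CreateDefaultContext()
plant_context = plant.GetMyContextFromRoot(context)

ik = InverseKinematics(plant, plant_context)
# Keep every pair of collision geometries at least 1 cm apart.
ik.AddMinimumDistanceConstraint(0.01)
# Ask a point on the end-effector frame (frame name is an assumption) to
# land inside a small box around the target position.
ik.AddPositionConstraint(
    frameB=plant.GetFrameByName("ee_link"),
    p_BQ=np.zeros(3),
    frameA=plant.world_frame(),
    p_AQ_lower=[0.49, -0.01, 0.49],
    p_AQ_upper=[0.51, 0.01, 0.51])

result = Solve(ik.prog())
if result.is_success():
    q_sol = result.GetSolution(ik.q())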
BTW, the formulation used in the InverseKinematics class is described in sections II.B and II.C of this paper.
Related
We tried solving a static equilibrium problem between two boxes:
static_equilibrium_problem = StaticEquilibriumProblem(
    autodiff_plant, autodiff_plant.GetMyContextFromRoot(autodiff_context), set())
result = Solve(static_equilibrium_problem.prog())
And got this error:
RuntimeError: Signed distance queries between shapes 'Box' and 'Box' are not supported for scalar type drake::AutoDiffXd
Is there more information about why this doesn't work, and how to extend the Static Equilibrium Problem to more general boxes and even meshes?
My guess is the SDF collision query between boxes is not differentiable for some reason, although it works for spheres: https://drake.mit.edu/doxygen_cxx/classdrake_1_1multibody_1_1_static_equilibrium_problem.html
The primary reason is the discontinuity in the derivatives; we haven't decided what we want to do about it. While the signed distance function itself is continuous, its gradient is not continuous inside the box. I'd recommend posting an issue on Drake with your use case and what kind of results you'd expect given the mathematical properties of SDF for non-smooth geometries (e.g., boxes, general meshes, etc.). The input will help us make decisions and implicitly increase the importance of resolving the known open issue. It may be that you have no problems with the gradient discontinuity.
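To see the discontinuity concretely, here is a standalone sketch (plain numpy, not Drake code) of the signed distance to a 2-D axis-aligned box. Inside the box, the gradient flips direction across the locus of points equidistant to two faces:

import numpy as np

def box_sdf(p, half=np.array([1.0, 1.0])):
    # Signed distance to an axis-aligned 2-D box with the given half-extents.
    d = np.abs(p) - half
    outside = np.linalg.norm(np.maximum(d, 0.0))
    inside = min(max(d[0], d[1]), 0.0)
    return outside + inside

def grad(p, h=1e-7):
    # Central-difference gradient of the SDF.
    return np.array([(box_sdf(p + h * e) - box_sdf(p - h * e)) / (2 * h)
                     for e in np.eye(2)])

# Two interior points that straddle the diagonal x = y:
print(grad(np.array([0.5, 0.4999])))  # ~ (1, 0): nearest face is x = 1
print(grad(np.array([0.4999, 0.5])))  # ~ (0, 1): nearest face is y = 1

For a sphere the nearest surface point varies smoothly away from the center, which is presumably why the sphere queries can support AutoDiffXd.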
You'll note this behavior has been documented for the query. For the mathematically related ComputePointPairPenetration() method, however, we do have additional support for AutoDiffXd (as documented here).
But the issue is your best path forward -- we've introduced some of this functionality based on demonstrable need; you seem to have that.
The documentation says this is similar to GL_MIRRORED_REPEAT. I tried to research this, but it doesn't seem as precisely specified as the OpenCV border types.
BORDER_REFLECT_101: gfedcb|abcdefgh|gfedcba (this is the default)
BORDER_REFLECT: fedcba|abcdefgh|hgfedcb
I guess the corners are not strictly defined by this, but I can clearly see what the edges are; the documentation for GL_MIRRORED_REPEAT seems to focus on corner behaviour. Overall it does not matter for our application, as there are physical limitations on the targets of interest that keep them within the bounds of the field of view. However, if I am writing regression tests, these specifics do matter.
How can I replicate BORDER_REFLECT_101 in Halide? Is it possible with Halide::BoundaryConditions, or do I need to implement my own clamping? Once I have proven that we replicate the behaviour, I can relax the conditions and use Halide::BoundaryConditions::mirror_image.
Bonus: Is Halide::BoundaryConditions more performant than using clamp, or is it just syntactic sugar? It seems the opposite: is it better to use clamp?
The boundary conditions are just a convenience. They're implemented here. They should be no more or less performant than writing the same yourself since they're just metaprogramming Exprs (i.e. they aren't compiler intrinsics).
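For what it's worth, the BORDER_REFLECT_101 indexing rule itself is easy to state, so you can either express it with your own clamping Exprs or check against a reference like this plain-Python version (just an illustration of the rule, not Halide code; assumes n > 1). If I read the Halide docs right, mirror_interior is the variant that, like REFLECT_101, does not repeat the edge sample:

def reflect_101(i, n):
    # Map index i into [0, n) the way OpenCV's BORDER_REFLECT_101 does:
    # ... gfedcb | abcdefgh | gfedcba ... (edge sample not repeated).
    period = 2 * (n - 1)
    i = abs(i) % period
    return period - i if i >= n else i

# For n = 8 (a..h at indices 0..7):
assert [reflect_101(i, 8) for i in (-3, -2, -1)] == [3, 2, 1]  # d, c, b
assert [reflect_101(i, 8) for i in (8, 9, 10)] == [6, 5, 4]    # g, f, e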
I am using the Python API of the Z3 solver to search for optimized schedules. It works pretty well, except that it is sometimes very slow even for small graphs (though sometimes it's very quick). The reason is probably that the constraints of my scheduling problem are quite complex.
I am trying to speed things up and stumbled on some articles about incremental solving.
As far as I understand, you can use incremental solving to prune some of the search space by only applying parts of the constraints.
So my original code looked like this:
for constraint in constraint_set:
    self._opt_solver.add(constraint)
self._opt_solver.minimize(some_objective)
self._opt_solver.check()
model = self._opt_solver.model()
I have now changed it to the following:
for constraint in constraint_set:
    self._opt_solver.push(constraint)
    self._opt_solver.check()
self._opt_solver.minimize(some_objective)
self._opt_solver.check()
model = self._opt_solver.model()
I basically substituted the "add" command with the "push" command and added a check() after each push.
So first of all: is my general approach correct?
Furthermore, I get an exception which I can't get rid of:
    self._opt_solver.push(constraint)
TypeError: push() takes 1 positional argument but 2 were given
Can anyone give me a hint as to what I am doing wrong? Also, is there a z3py tutorial that explains (ideally with some examples) how to use incremental solving with the Python API?
My last question: is this the right way to reduce the execution time of the solver at all, or is there a different/better way?
The function push doesn't take an argument. It creates a "backtracking" point that you can pop to later on. See here: http://z3prover.github.io/api/html/classz3py_1_1_solver.html#abc4ae989afee7ad164844640537107d9
So it seems push isn't really what you want/need here at all. You should simply add your constraints one by one and call check. However, I very much doubt that checking after each addition will speed anything up significantly. The optimizing solver (as opposed to the regular one), in particular, usually solves everything from scratch. (See the relevant discussion here: https://github.com/Z3Prover/z3/issues/1577)
Regarding incremental: the Python API is automatically "incremental." Incremental simply means the ability to call check() multiple times without the solver forgetting what it has seen before (i.e., call check, assert more facts, call check again; the second check takes into account all the assertions from the very beginning). You shouldn't assume this will be faster than calling check just once at the very end: it depends entirely on the heuristics and decision procedures involved, which are in turn dependent on the problem at hand.
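Here is a minimal sketch of the incremental pattern in z3py (the variables and objective are made up): constraints always go in through add(), while push()/pop() merely mark and discard backtracking points:

from z3 import Optimize, Int, sat

opt = Optimize()
x, y = Int('x'), Int('y')
opt.add(x + y <= 10, x >= 0, y >= 0)  # base constraints

opt.push()               # backtracking point: push() takes no constraint
opt.add(x >= 3)          # a tentative extra constraint
opt.minimize(x + y)
if opt.check() == sat:
    print(opt.model())   # e.g. [x = 3, y = 0]
opt.pop()                # discards x >= 3; the base constraints remain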
I was studying syntax-directed definitions from "Compilers: Principles, Techniques and Tools" by Aho, Ullman, Sethi and Lam when I came across the following line in the context of circular dependencies of attributes in parse trees:
It is computationally difficult to determine whether or not there exist any circularities in any of the parse trees that a given SDD could have to translate.
(section: 5.1.2)
Also in http://cs.nyu.edu/courses/fall12/CSCI-GA.2130-001/lecture8.pdf it is mentioned that
Detecting cycles has exponential time complexity
But Tarjan's strongly connected components algorithm can find cycles in O(E+V). So why do the above sources contradict this?
Can anyone tell me what I am missing?
It's O(E+V) to find a cycle in some particular parse tree. But that's not the problem. The problem is to
determine whether or not there exist any circularities in any of the parse trees that a given SDD could have to translate. (Emphasis added.)
That's a rather more difficult problem: a grammar generates unboundedly many parse trees, so you cannot simply run Tarjan's algorithm on each one. Indeed, the circularity test for attribute grammars is known to be intrinsically exponential (a classical result of Jazayeri, Ogden and Rounds), so no polynomial-time cycle check on a single graph can settle it.
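To make the distinction concrete, here is a toy illustration (my own example, not from the book). The same SDD can induce an acyclic dependency graph for one parse tree and a cyclic one for another; checking a single tree is cheap, but the circularity question quantifies over every derivation the grammar allows:

# Hypothetical SDD:
#   S -> x A   with rule A.i = f(A.s)   (dependency edge A.s -> A.i)
#   S -> y A   with rule A.i = 1        (no edge)
#   A -> a     with rule A.s = g(A.i)   (dependency edge A.i -> A.s)
tree1 = {"A.s": ["A.i"], "A.i": ["A.s"]}  # from S -> x A -> x a: cyclic
tree2 = {"A.i": ["A.s"], "A.s": []}       # from S -> y A -> y a: acyclic

def has_cycle(graph):
    # Recursive DFS with colors: 0 = unvisited, 1 = on stack, 2 = done.
    color = {v: 0 for v in graph}
    def dfs(v):
        color[v] = 1
        for w in graph[v]:
            if color[w] == 1 or (color[w] == 0 and dfs(w)):
                return True
        color[v] = 2
        return False
    return any(color[v] == 0 and dfs(v) for v in graph)

print(has_cycle(tree1), has_cycle(tree2))  # True False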
I'm in a situation where I have a number of paths on the screen, updated several times per second. They are extremely simple line paths; each of them is just a single line on the canvas.
I need an efficient way of updating the paths. At the moment, I'm retrieving the path string for each of them, appending 'L xx xx', and redrawing. That's fine with a small number of lines, but the performance gets really bad once the frequency (or the number of paths) increases.
So, the real question is: does Raphael provide a method that would just 'add a point to the path'?
I'm very new to vectors, not to mention Raphael and SVG.
Would be grateful for any help
Thanks
K
I wonder what you're doing ;)
Try just maintaining the updated path string yourself and setting it with the "path" attribute on the path element.
I would be interested to hear if the performance improves.
You might also like to visit my site and play with the demo there for lovely rounded paths:
http://irunmywebsite.com/raphael/additionalhelp.php?v=2
Look at the Catmull-Rom curves technique; it might inspire.