We tried solving a static equilibrium problem between two boxes:
static_equilibrium_problem = StaticEquilibriumProblem(
    autodiff_plant,
    autodiff_plant.GetMyContextFromRoot(autodiff_context),
    set())
result = Solve(static_equilibrium_problem.prog())
And got this error:
RuntimeError: Signed distance queries between shapes 'Box' and 'Box' are not supported for scalar type drake::AutoDiffXd
Is there more information about why this doesn't work, and how to extend the Static Equilibrium Problem to more general boxes and even meshes?
My guess is the SDF collision query between boxes is not differentiable for some reason, although it works for spheres: https://drake.mit.edu/doxygen_cxx/classdrake_1_1multibody_1_1_static_equilibrium_problem.html
The primary reason is the discontinuity in the derivatives; we haven't decided what we want to do about it. While the SDF itself is continuous, its gradient is not continuous inside the box. I'd recommend posting an issue on Drake with your use case and the kind of results you'd expect given the mathematical properties of the SDF for non-smooth geometries (e.g., boxes, general meshes, etc.). That input will help us make decisions and raise the priority of the known open issue. It may also turn out that the gradient discontinuity poses no problem for your use case.
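For intuition, here is a minimal sketch (plain NumPy, not Drake) of a 2D axis-aligned box SDF: inside the box the nearest face switches across the diagonal, so a finite-difference gradient flips direction there even though the distance value itself varies continuously.

import numpy as np

def box_sdf(p, half_extents):
    # Signed distance from point p to an axis-aligned box centered at the origin.
    q = np.abs(p) - half_extents
    outside = np.linalg.norm(np.maximum(q, 0.0))
    inside = min(q.max(), 0.0)
    return outside + inside

h = np.array([1.0, 1.0])  # unit half-extents
eps = 1e-6
for p in (np.array([0.5, 0.49]), np.array([0.49, 0.5])):
    grad = np.array([(box_sdf(p + eps * e, h) - box_sdf(p - eps * e, h)) / (2 * eps)
                     for e in np.eye(2)])
    print(p, grad)  # jumps from roughly (1, 0) to (0, 1) across the diagonal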
You'll note this behavior has been documented for the query. However, the mathematically related ComputePointPairPenetration() method does have additional support for AutoDiffXd (as documented here).
But filing that issue is your best path forward -- we've introduced some of this functionality based on demonstrable need, and you seem to have that.
I'm working on a problem that requires searching for some unique 3x3 patterns in binary images. My current method is to do a convolution with a kernel where each value is a different power of two, essentially producing a 9-bit number for each pixel. This is working for me, and I can search for my patterns quickly by simply checking for the corresponding numbers.
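For concreteness, here is a minimal sketch of what I'm doing (using SciPy here; correlate rather than convolve, so the kernel is not flipped relative to the pattern):

import numpy as np
from scipy.ndimage import correlate

img = (np.random.rand(8, 8) > 0.5).astype(np.uint16)  # toy binary image

# Each kernel entry is a distinct power of two, so each pixel's response
# is a unique 9-bit code for its 3x3 neighbourhood.
kernel = (2 ** np.arange(9)).reshape(3, 3).astype(np.uint16)
codes = correlate(img, kernel, mode="constant")

# A pattern is found wherever the code matches, e.g. an all-ones neighbourhood:
pattern = np.ones((3, 3), dtype=np.uint16)
target = int((pattern * kernel).sum())  # 511
ys, xs = np.where(codes == target)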
I have a couple questions:
Is there a name for this kernel or method? I cannot find any reference to one like it, but I don't exactly know what to call it.
Is there another way to go about this? I get skeptical of my methods when I don't see anyone else doing it :)
I have been searching to find out whether Z3 supports complex numbers and have found the following: https://leodemoura.github.io/blog/2013/01/26/complex.html
The author states that (1) Complex numbers are not yet implemented in Z3 as a built-in (this was written in 2013), and (2) that Complex numbers can be encoded on top of the Real numbers provided by Z3.
The basic idea is to represent a Complex number as a pair of Real numbers. He defines the basic imaginary number with I=(0,1), that is: I means the real part equals 0 and the imaginary part equals 1.
He offers the encoding (which we can test on our own machines), and with it we can solve the equation x^2+2=0. I received the following result:
sat
x = (-1.4142135623?)*I
The sat result makes sense, since this equation is solvable in the encoding of the theory of complex numbers we have just built (as a consequence of the theory of algebraically closed fields). However, the root itself does not make sense to me: what about (1.4142135623?)*I?
I would understand receiving both roots, but if only one is returned, I do not understand why I get the negative one.
Maybe I misread something or I missed something.
Also, I would like to know whether complex numbers have since been implemented as a built-in in Z3, i.e., with a standard:
x = Complex("x")
And with tactics for some kind of NCA (nonlinear complex arithmetic).
I have not seen any reference to this theory in SMT-LIB either.
AFAIK there is no plan to add complex numbers to SMT-LIB. There's a Google group for SMT-LIB; it might make sense to post there and see if there is any interest.
Note, that particular blog post says "find a root"; this is just satisfiability, i.e. it finds one solution, not all of them. (But you can ask for another one by adding an assertion that says x should be different from the first result.)
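As a minimal sketch of that enumeration loop, using a bare pair-of-Reals encoding of x = a + b*I instead of the blog post's Complex wrapper (x^2 + 2 = 0 becomes a^2 - b^2 + 2 = 0 and 2ab = 0):

from z3 import Reals, Solver, Or, sat

a, b = Reals("a b")  # x = a + b*I
s = Solver()
s.add(a * a - b * b + 2 == 0, 2 * a * b == 0)

while s.check() == sat:
    m = s.model()
    print(m[a], m[b])                # prints both roots, one per iteration
    s.add(Or(a != m[a], b != m[b]))  # block this solution and ask for another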
The documentation says Halide::BoundaryConditions::mirror_image is similar to GL_MIRRORED_REPEAT. I tried to research this, but GL_MIRRORED_REPEAT doesn't seem to be specified as precisely as the OpenCV border types.
BORDER_REFLECT_101 as gfedcb|abcdefgh|gfedcba, this is the default.
BORDER_REFLECT as fedcba|abcdefgh|hgfedcb
I guess the corners are not strictly defined by this, but I can clearly see what the edges are. The documentation for GL_MIRRORED_REPEAT seems to focus on corner behaviour. Overall, it does not matter for our application, since physical limitations on the targets of interest keep them within the bounds of the field of view. However, I am writing regression tests, and there these specifics do matter.
How can I replicate BORDER_REFLECT_101 in Halide? Is it possible with Halide::BoundaryConditions or do I need to implement my own clamping? I can relax the conditions after proving we have replicated behaviour and use Halide::BoundaryConditions::mirror_image.
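For reference, my understanding of the index arithmetic BORDER_REFLECT_101 performs, as a plain Python sketch (the same expression could presumably be written with Halide Exprs and clamp):

def reflect_101(i, n):
    # Map any integer index i into [0, n) by mirroring about the edge
    # samples without repeating them: ...dcb|abcd|cba...
    period = 2 * (n - 1)
    i = abs(i) % period
    return period - i if i >= n else i

assert [reflect_101(i, 4) for i in range(-3, 7)] == [3, 2, 1, 0, 1, 2, 3, 2, 1, 0]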
Bonus: Is Halide::BoundaryConditions more performant than using clamp, or is it just syntactic sugar? Or is it the opposite: is it better to use clamp directly?
The boundary conditions are just a convenience. They're implemented here. They should be no more or less performant than writing the same yourself since they're just metaprogramming Exprs (i.e. they aren't compiler intrinsics).
I've been looking into some papers on inverse kinematics planning with collision avoidance for a robotic arm. I found a paper called "Global Inverse Kinematics via Mixed-Integer Convex Optimization", which I believe has a few of Drake's developers' names on it.
The paper says that the code is already part of Drake. However, when I found it, the code was placed in the attic folder. (https://github.com/RobotLocomotion/drake/blob/master/attic/multibody/global_inverse_kinematics.h)
In the current Kuka arm example under move_iiwa_ee, the IK code called ConstraintRelaxingIk is used instead.
I would like to know why the "Global Inverse Kinematics via Mixed-Integer Convex Optimization" IK code was put to the side and if there were any problems with it.
Also, I'm not sure if the current ConstraintRelaxingIk does collision avoidance. Could someone clarify this?
(And if it does, are there any extra steps to take other than inserting an obstacle model?)
Thank you
Thanks for your interest in our work.
The global inverse kinematics code is in the attic folder because it uses RigidBodyTree, which will be deprecated soon. I haven't had time to rewrite the code to use the new MultibodyTree yet. For the moment, if you would like to try the approach in the global IK paper, please use the code in the attic folder; I will try to move it out of the attic folder and onto MultibodyTree. (Update: the code has been moved out of the attic folder. It is now in the inverse_kinematics folder.)
ConstraintRelaxingIk doesn't handle collision avoidance. On the other hand, you can use the InverseKinematics class. It formulates IK as a nonlinear optimization problem and calls a nonlinear solver to find the solution. Specifically for collision avoidance, you could use AddMinimumDistanceConstraint to impose a minimum distance constraint between all pairs of geometries. Alternatively, you could call AddDistanceConstraint if you want to impose a distance constraint between a specific pair of geometries. I would actually recommend trying the InverseKinematics class first before the global IK approach, since InverseKinematics uses MultibodyPlant and usually runs faster than the global approach.
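A minimal sketch of that workflow in pydrake (assuming plant is a finalized MultibodyPlant with collision geometry registered; exact import paths and signatures vary between Drake versions):

from pydrake.multibody.inverse_kinematics import InverseKinematics
from pydrake.solvers import Solve  # pydrake.solvers.mathematicalprogram on older versions

ik = InverseKinematics(plant)          # plant: a finalized MultibodyPlant (not shown here)
ik.AddMinimumDistanceConstraint(0.01)  # keep all geometry pairs at least 1 cm apart
# ...add end-effector pose/position constraints here...
result = Solve(ik.prog())
if result.is_success():
    q_sol = result.GetSolution(ik.q())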
BTW, the formulation used in the InverseKinematics class is described in sections II.B and II.C of this paper.
If my title is incorrect/could be better, please let me know.
I've been trying to find an existing paper/article describing the problem that I'm having: I'm trying to create vectors for words so that they are equal to the sum of their parts.
For example: Cardinal (the bird) would be equal to the sum of the vectors for red and bird, and ONLY those.
In order to train such a model, the input might be something like a dictionary, where each word is defined by its attributes.
Something like:
Cardinal: bird, red, ....
Bluebird: blue, bird,....
Bird: warm-blooded, wings, beak, two eyes, claws....
Wings: Bone, feather....
So in this instance, each word-vector is equal to the sum of the word-vectors of its parts, and so on.
I understand that in the original word2vec, semantic distance was preserved, such that Vec(Madrid) - Vec(Spain) + Vec(France) ≈ Vec(Paris).
Thanks!
PS: Also, if it's possible, new words should be able to be added later on.
If you're going to be building a dictionary of the components you want, you don't really need word2vec at all. You've already defined the dimensions you want specified: just use them, e.g. in Python:
kb = {"wings": {"bone", "feather"},
"bird": {"wings", "warm-blooded", ...}, ...}
Since the values are sets, you can do set operations, e.g. intersection to find shared attributes:
kb["bird"] & kb["reptile"]
You'll need to find some ways to decompose the elements recursively for comparisons, simplifications, etc. These are decisions you'll have to make based on what you expect to happen during such operations.
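One way to sketch that recursion, assuming anything without a kb entry of its own counts as a primitive attribute:

kb = {"wings": {"bone", "feather"},
      "bird": {"wings", "warm-blooded", "beak"}}

def expand(kb, word, seen=None):
    # Recursively replace each attribute by its own definition,
    # collecting primitives (attributes with no kb entry of their own).
    seen = set() if seen is None else seen
    parts = set()
    for attr in kb.get(word, set()):
        if attr in seen:  # guard against cycles in the dictionary
            continue
        seen.add(attr)
        parts |= expand(kb, attr, seen) if attr in kb else {attr}
    return parts

print(expand(kb, "bird"))  # {'bone', 'feather', 'warm-blooded', 'beak'}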
This sort of manual dictionary development is quite an old-fashioned approach. Folks like Schank and Abelson used to do stuff like this in the 1970s. The problem is that as these dictionaries get more complex, they become intractable to maintain and more inaccurate in their approximations. You're welcome to try as an exercise---it can be kind of fun!---but keep your expectations low.
You'll also find aspects of meaning lost in these sorts of decompositions. One of word2vec's remarkable properties is its sensitivity to the gestalt of words---a word's meaning may be composed of parts, but there's a piece in that composition that makes the whole greater than the sum of the parts. In a decomposition, the gestalt is lost.
Rather than trying to build a dictionary, you might be best off exploring what W2V gives you anyway, from a large corpus, and seeing how you can leverage that information to your advantage. The linguistics of what exactly W2V renders from text aren't wholly understood, but in trying to do something specific with the embeddings, you might learn something new about language.