I'm trying to ensure that a multi-linked robot doesn't collide with an obstacle while doing trajectory optimization. Would I be able to use ComputeSignedDistancePairClosestPoints within a MathematicalProgram constraint to do this?
For example, if I wanted to add a constraint that my end effector did not collide with a circular geometry, could I add a constraint to all knot points in the trajectory optimization that ComputeSignedDistancePairClosestPoints is greater than some distance? My original thought was to use forward kinematics to calculate the end effector position in world coordinates in terms of the state variables (joint angles), then find the distance from this point to the obstacle and add a constraint that the distance must be greater than the radius of the obstacle.
You could try the DistanceConstraint class, which imposes that the distance between a specific pair of geometries stays above a certain threshold. Alternatively, you can try MinimumDistanceConstraint, which imposes that the distance between every pair of geometries stays above a certain threshold.
In pydrake, you can do
from pydrake.multibody.inverse_kinematics import DistanceConstraint

# Keep the distance between the given pair of geometries within
# [distance_lower, distance_upper] at the configuration in q.
prog.AddConstraint(DistanceConstraint(plant, [geometry_id1, geometry_id2], plant_context, distance_lower, distance_upper), q)
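If you want the all-pairs version, here is a minimal sketch along the same lines (reusing the same prog, plant, plant_context, and q; the exact signature may differ across Drake versions, so check yours):

from pydrake.multibody.inverse_kinematics import MinimumDistanceConstraint

# Require every pair of geometries in the scene to stay at least 1 cm apart
# at the configuration held in the decision variables q.
min_distance = 0.01
prog.AddConstraint(MinimumDistanceConstraint(plant, min_distance, plant_context), q)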
Related
I have a question about the inverseKinTraj class.
It seems to return qsol as well as qdotsol and qddotsol, and I would like to add a velocity constraint at each time step.
For example, I would like to add A(q(t))*qdot(t) = constant and (q(t+1) - q(t))/dt = qdot(t+1). Is there any way I could impose velocity-level constraints on the inverse kinematics trajectory?
inverseKinTraj doesn't offer an API for the constraint A(q(t)) * qdot(t) = constant.
Moreover, inverseKinTraj assumes that the trajectory is a piecewise cubic spline (i.e., it has continuous joint velocities/accelerations), so we cannot impose the backward Euler relation (q(t+1) - q(t)) / dt = qdot(t+1), which would not yield continuous joint velocities/accelerations.
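If you do need these velocity-level constraints, one workaround is to set up the problem yourself with MathematicalProgram instead of inverseKinTraj. A rough sketch of that idea, with made-up sizes and a placeholder A(q) (the import path may also differ across Drake versions):

import numpy as np
from pydrake.solvers import MathematicalProgram

nq, N, dt = 7, 20, 0.05  # joint count, knot count, time step (all made up)

def A_of_q(qt):
    # Placeholder for your configuration-dependent matrix A(q).
    return np.eye(nq)

c = np.zeros(nq)  # the constant right-hand side

prog = MathematicalProgram()
q = prog.NewContinuousVariables(N, nq, "q")
qdot = prog.NewContinuousVariables(N, nq, "qdot")

# Backward Euler, (q(t+1) - q(t)) / dt = qdot(t+1), as linear equalities.
for t in range(N - 1):
    for i in range(nq):
        prog.AddLinearEqualityConstraint(
            q[t + 1, i] - q[t, i] - dt * qdot[t + 1, i], 0.0)

# A(q(t)) * qdot(t) = constant is nonlinear in q, so add it as a generic
# constraint with a callback, evaluated at every knot point.
def velocity_map_residual(x):
    qt, qdott = x[:nq], x[nq:]
    return A_of_q(qt).dot(qdott) - c

for t in range(N):
    prog.AddConstraint(velocity_map_residual, lb=np.zeros(nq), ub=np.zeros(nq),
                       vars=np.concatenate([q[t], qdot[t]]))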
What is the math that SCNLookAtConstraint is doing? I want to try to recreate this with vectors.
I think it can be done with a cross product and a dot product once you have the two directional vectors.
By default the node points in the direction of the negative z-axis of its local coordinate system.
The other direction we are interested in points from the looking node to the target node, expressed in the looking node's local coordinate system. You can get it by converting the positions using convertPosition:fromNode: or convertPosition:toNode:.
If not done already, normalize the two directional vectors.
With the two directions in the local coordinate system, a cross product between the two gives a vector that is orthogonal to the plane that can be formed between the two directions. This vector is the surface normal to that plane. Any rotation around that normal is going to be another vector that remains in the plane.
Since the two directions are normalized, a dot product of the two should give you cos(ϴ), where ϴ is the angle between the two.
Rotating the first vector (the one that points in the direction of the negative z-axis) by this angle around the normal to the plane should make it point in the same direction as the second vector (that one that points at the other node).
That should be the way it's done for two vectors (or at least one way to do it).
To do it for a node, you would set a rotation of that angle around that axis on the node that is looking. This rotates the node so that its local negative z-axis (the direction it's looking) points at the other node.
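To make the vector math concrete, here is a minimal numpy sketch of the same idea (the generic math, not the SceneKit API; Rodrigues' formula stands in for SceneKit applying the rotation):

import numpy as np

def look_at_axis_angle(node_pos, target_pos):
    # The node's default look direction is its local -z axis.
    forward = np.array([0.0, 0.0, -1.0])
    to_target = target_pos - node_pos
    to_target = to_target / np.linalg.norm(to_target)  # normalize
    axis = np.cross(forward, to_target)  # normal of the plane the directions span
    angle = np.arccos(np.clip(np.dot(forward, to_target), -1.0, 1.0))
    n = np.linalg.norm(axis)
    if n < 1e-9:  # directions (anti-)parallel: any perpendicular axis will do
        return np.array([0.0, 1.0, 0.0]), angle
    return axis / n, angle

def rotate(v, axis, angle):
    # Rodrigues' rotation formula: rotate v by angle around the unit axis.
    return (v * np.cos(angle)
            + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

axis, angle = look_at_axis_angle(np.zeros(3), np.array([1.0, 2.0, 3.0]))
# Rotating -z by (axis, angle) now points it at the target direction:
print(rotate(np.array([0.0, 0.0, -1.0]), axis, angle))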
I have a very similar example in one of the chapters for 3D Graphics with Scene Kit, where a node is rotated to point straight out of the surface of a sphere. You can look at the sample code to see how it's solved there.
I know that you can define a sprite's mass and density with SpriteKit. If I had 2 sprites laying on top of a 3rd sprite, like a scale, is there a way to measure the total weight that is lying on the 3rd node?
Sure. Just define your own gravity in the z direction, multiply it by the mass of the node on top, and you've got its weight. No need to take density into account (it's not like my scale says I weigh any differently if I stand on it or lie down on it). Pressure is force/area, so just divide the weight by the contact area if you want to get that.
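Spelled out, the arithmetic is just this (a trivial sketch; every number is made up):

g = 9.8  # your hand-rolled "z gravity", m/s^2
masses = [2.0, 3.5]       # masses of the two sprites on the scale, kg
weight = sum(masses) * g  # total weight pressing on the third sprite, N
pressure = weight / 0.25  # divide by a 0.25 m^2 contact area for pressure, Pa
print(weight, pressure)   # ~53.9 N, ~215.6 Pa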
Given a point on a plane A, I want to be able to map to its corresponding point on plane B. I have a set of N corresponding pairs of reference points between the two planes, however, the overall mapping is not a simple affine transform (no homographies for me).
Things I have tried:
For a given point, find the three closest reference points in plane A, compute the barycentric coordinates of that point in the triangle they form, and then apply those coordinates to the corresponding reference points in plane B. How it failed: sometimes the three closest points were nearly collinear, so errors were huge. Also, there was no consistency in the mapping when crossing triangle borders. It was very "jittery."
Compute all possible triangles from the N reference points (O(N^3) of them). Order them by size. For the given point, find the smallest triangle containing it. This fixes the collinearity problem, but it was still extremely jittery and slow.
Start with a triangulated plane A. Iterate through the reference points, adding each one to the triangulation. Every time you add a point, it lies in at least one existing triangle; break that triangle into three triangles using the new reference point as a vertex. You end up with plane A fully triangulated, so you can map from plane A to plane B with ease (as sketched below). Issues: you can show that every triangle will have a vertex on the edge of the plane, which results in huge errors if your reference points are far from the edges.
I feel like this should be a fairly standard problem. Are there standard algorithms/libraries for this?
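For reference, the triangulate-then-interpolate scheme in the third attempt is essentially what scipy ships as LinearNDInterpolator (Delaunay triangulation plus barycentric interpolation). A minimal sketch with made-up data:

import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)
src_pts = rng.random((20, 2))                  # reference points on plane A
dst_pts = src_pts + 0.1 * rng.random((20, 2))  # corresponding points on plane B

mapper = LinearNDInterpolator(src_pts, dst_pts)  # Delaunay + barycentric
print(mapper([[0.5, 0.5]]))  # mapped point; returns nan outside the convex hull

# For a smoother, non-piecewise mapping, scipy.interpolate.RBFInterpolator
# with kernel="thin_plate_spline" is a common alternative.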
There you go, my friend. I have used it myself and can only recommend you give it a try.
Khan Academy - Matrix transformations
Understanding how we can map one set of vectors to another set. Matrices used to define linear transformations
https://www.khanacademy.org/math/linear-algebra/matrix_transformations
I have to detect a pattern of 6 circles using OpenCV. I have detected the circles and their centroids by using thresholding and the contour function in OpenCV.
Now I have to define the relation between these circles in a way that should be invariant to scale and rotation. With this I would be able to detect this pattern in various views. I have to use this pattern for determining the object pose.
How can I achieve scale/rotation invariance? Do you have any reference I could read about it?
To make your pattern invariant to rotation & scale, you have to normalize the direction and the scale when detecting your pattern. Here is a simple algorithm to achieve this:
1. Detect the centers and circle sizes (you say you have already achieved this - good!)
2. Compute the average center using a simple mean, and express all the centers relative to this mean.
3. Find the farthest center using a simple norm (Euclidean is good enough).
4. Scale the center positions and the circle sizes so that this maximum distance is 1.0.
5. Rotate the centers so that the coordinates of the farthest one become (1.0, 0).
you're done. You are now the proud owner of a scale/rotation invariant pattern detector!! Congratulations!
Now you can find patterns, transform them as suggested, and compare center position & circle sizes.
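A minimal numpy sketch of those steps, assuming centers is an (N, 2) array of centroids and sizes an (N,) array of radii (both hypothetical inputs):

import numpy as np

def normalize_pattern(centers, sizes):
    centered = centers - centers.mean(axis=0)  # step 2: express from the mean
    dists = np.linalg.norm(centered, axis=1)
    far = np.argmax(dists)                     # step 3: farthest center
    scale = dists[far]
    centered = centered / scale                # step 4: max distance becomes 1.0
    sizes = sizes / scale
    a = -np.arctan2(centered[far, 1], centered[far, 0])
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])    # step 5: rotate farthest to (1.0, 0)
    return centered @ R.T, sizes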
It is not entirely clear to me if you need to find the rotation, or merely get rid of it, or detect if the circles actually form the pattern you linked. Either way, the answer is much the same.
I would start by finding the two circles that have only one close neighbour. For each circle centroid, calculate the distance to its two closest neighbours. If the distances differ by more than, say, 10%, the centroid belongs to an "end" circle (one of the top ones in your link).
Now that you have found the two end circles, rotate the pattern so that the line between them is horizontal. If the other centroids end up above it, rotate another 180 degrees so that the pattern lands in the orientation you want.
Now you can calculate the scaling from the average inter-centroid distance.
Hope that helps.
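A small sketch of the end-circle test, with centroids as a hypothetical (N, 2) numpy array:

import numpy as np

def find_end_circles(centroids, tolerance=0.10):
    ends = []
    for i, c in enumerate(centroids):
        d = np.sort(np.linalg.norm(centroids - c, axis=1))
        d1, d2 = d[1], d[2]  # the two closest neighbours (d[0] is the point itself)
        if (d2 - d1) / d1 > tolerance:  # distances differ by more than ~10%
            ends.append(i)
    return ends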
Your question sounds exactly like what the SURF algorithm does. It finds points of interest and groups them together in a way invariant to rotation and scale, and can find the same object in other pictures.
Just search for OpenCV and SURF.