I would like to know if there is an elegant way to impose acceleration- or jerk-level constraints on a trajectory optimization using the DirectCollocation class.
I am working with an Acrobot system, and I have already included velocity-level constraints, but I would like a minimum-jerk / smooth-jerk optimal trajectory.
Thank you in advance.
Yes. The standard direct collocation method (which is implemented in the DirectCollocation class) uses cubic splines to represent the position trajectory. If you take the second derivative, you'll be left with a first-order spline... so the acceleration is always piecewise-linear, and the jerk is always piecewise constant.
So the constraints you would add would be simple constraints on the spline coefficients of the state trajectory. We don't offer those constraints directly in the API (but we could). You could implement them (are you in Python or C++?) following the pattern here.
It might also help to look at the corresponding section of the course notes if you haven't.
One subtlety -- the current implementation actually represents the state trajectory as the cubic spline (it redundantly represents the positions and velocities). You could opt to add your constraints to either the position trajectory or the velocity trajectory. The constraints should be satisfied perfectly at the knots/collocation points, but the trajectories will be subtly different due to the interpolation.
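Here is a rough sketch of that idea in Python (untested; the module paths and the .prog() accessor vary a bit across Drake versions, and N, h, and jerk_max are just illustrative). With a fixed time step, a finite-difference jerk over the velocity knots becomes a simple linear constraint; it approximates the spline-coefficient constraint described above rather than reproducing it exactly:

```python
import numpy as np
from pydrake.examples.acrobot import AcrobotPlant
from pydrake.systems.trajectory_optimization import DirectCollocation

plant = AcrobotPlant()
context = plant.CreateDefaultContext()
N, h = 21, 0.05
# Fix the time step by making the minimum and maximum equal.
dircol = DirectCollocation(plant, context, N, h, h)
prog = dircol.prog()  # in older Drake versions, dircol itself is the program

nq = 2           # the Acrobot has two joints
jerk_max = 50.0  # rad/s^3, an illustrative bound
for k in range(N - 2):
    # Velocities are the second half of the state at each knot.
    v0 = dircol.state(k)[nq:]
    v1 = dircol.state(k + 1)[nq:]
    v2 = dircol.state(k + 2)[nq:]
    # jerk_k ≈ ((v2 - v1)/h - (v1 - v0)/h) / h
    jerk = (v2 - 2 * v1 + v0) / h**2
    for j in range(nq):
        prog.AddLinearConstraint(jerk[j] <= jerk_max)
        prog.AddLinearConstraint(jerk[j] >= -jerk_max)
```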
I have an issue with contact modeling in the robotics simulation package Drake.
I am trying to position-control an iiwa manipulator to turn a body attached to a screw joint.
What I expect is the nut progressing downwards. What I see is the end-effector sliding around the nut, unable to induce the nut's rotation along the bolt.
note: This is a continued investigation into this question.
The simplifying experiment is to use ExternallyAppliedSpatialForce, instead of the manipulator.
I apply a torque around the bolt's Z axis at the center of the bolt's axis. The torque magnitude is 1 Nm, and this produces the desired effect:
Then I stop the artificial force application and try manipulation again, combined with contact visualization via ContactResultsToMeshcat:
I also print the contact force magnitudes. Forces are in the range of 50-200 N.
But the nut does not move.
My question is: where should I take the investigation next, when a 1 Nm torque can turn the screw joint while a 200 N contact force cannot?
So far, after watching 10 repeats of the visualization, I notice that only one finger of the end-effector is in contact at any given moment. So I think I must prepare less sloppy trajectories for the fingers of the end-effector [and for the whole arm, for that matter].
It could be the inverse dynamics controller used in ManipulationStation, which is different from (and softer than) iiwa’s real controller, that’s making the robot’s joints too soft and unable to turn the nut.
Specifically, the inverse dynamics controller scales the joint stiffness matrix by the mass matrix of the robot, whereas iiwa's real controller does not do that.
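In symbols (a generic inverse-dynamics control law for illustration, not Drake's exact formulation):

τ = M(q)·(Kp·(q_d − q) + Kd·(v_d − v)) + C(q, v)·v + g(q)

so the stiffness the environment feels at the joints is M(q)·Kp rather than Kp, which can leave some joints effectively much softer than the real iiwa's.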
The problem most likely was that the actuation port of the bolt-and-nut model was not connected to any input, and hence the joint was not simulated properly (more details here).
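For reference, a sketch of that kind of fix in pydrake ("bolt_and_nut.sdf" and the wiring are illustrative; Parser.AddModels is the newer spelling of AddModelFromFile):

```python
import numpy as np
from pydrake.systems.framework import DiagramBuilder
from pydrake.systems.primitives import ConstantVectorSource
from pydrake.multibody.plant import AddMultibodyPlantSceneGraph
from pydrake.multibody.parsing import Parser

builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=1e-3)
nut_model = Parser(plant).AddModels("bolt_and_nut.sdf")[0]  # illustrative path
plant.Finalize()

# Explicitly wire the model's actuation input, even with zeros,
# so the screw joint is properly simulated.
zeros = builder.AddSystem(
    ConstantVectorSource(np.zeros(plant.num_actuated_dofs(nut_model))))
builder.Connect(zeros.get_output_port(),
                plant.get_actuation_input_port(nut_model))
diagram = builder.Build()
```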
After the fix, the simulation looks like this:
I am building a two-link system. The parent link is a curved object, and the second link has a single degree of freedom to follow the curved length. I attempted to implement this with a prismatic joint, but that can only define a linear path. How can I model this system in Drake?
You could use a joint with more degrees of freedom and then use a controller (e.g. PD) to force it to track the curve you want. If you really only need 1 dof you could use two prismatic joints; if the object also needs to be able to rotate you could start with a planar joint (two translations and a rotation) and use the controller just to constrain one of the dofs.
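Here's a minimal sketch of the PD idea, framework-agnostic (the curve f, the gains, and the numbers are all illustrative): drive the second prismatic dof y toward y = f(x):

```python
import numpy as np

def curve_tracking_force(x, y, xdot, ydot, f, fprime, kp=500.0, kd=50.0):
    """PD force on the y prismatic joint, driving it toward y = f(x)."""
    y_des = f(x)
    ydot_des = fprime(x) * xdot  # chain rule: desired y-velocity
    return kp * (y_des - y) + kd * (ydot_des - ydot)

# Example: constrain y to a circular arc y = sqrt(r^2 - x^2), r = 1 m.
r = 1.0
f = lambda x: np.sqrt(r**2 - x**2)
fprime = lambda x: -x / np.sqrt(r**2 - x**2)
u_y = curve_tracking_force(x=0.5, y=0.8, xdot=0.1, ydot=0.0, f=f, fprime=fprime)
```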
It is also possible to create a new joint type that would have a single dof and follow a curved slot. That would require some engineering though. The basic theory is covered in this paper which Drake follows closely.
I'm trying to port trajectory optimization code originally written for TrajOpt into Drake. One of the nice things about TrajOpt is that it could solve SQP trajectory optimization problems with a constraint enforcing a minimum distance between the robot and surrounding obstacles, and it supported a pretty broad range of geometries (all the standard convex primitives plus simple convex meshes). For a number of reasons, TrajOpt is no longer the right choice for my project, so I'm porting my trajectory optimization code over to Drake. I think MinimumDistanceConstraint is what I want to replicate this functionality, but it seems that Drake allows AutoDiffXd signed distance queries only for spheres and half-spaces, not for more general convex shapes (like boxes or cylinders).
All of my other constraints support AutoDiff (I have some custom constraints for "probability of collision," but those provide an analytical derivative that can be used in an AutoDiff). In order to add a MinimumDistanceConstraint that supports more general geometry, would I have to formulate the MathematicalProgram entirely with doubles? Would that slow down the performance of the solver (e.g. by having to do finite differences instead of using the gradient information in AutoDiffXd)?
In an ideal world, I'd like to avoid resorting to "bubble-wrapping" my robot and environment (replacing all the collision geometry with spheres), since the runtime of the custom constraints I'm using scales with the number of collide-able pairs in the scene (I'm currently using convex geometry to keep this number relatively low).
Any help would be appreciated!
but it seems that Drake allows AutoDiffXd signed distance queries only for spheres and half-spaces, not for more general convex shapes (like boxes or cylinders)
I think MinimumDistanceConstraint can handle more general geometries (including boxes and cylinders) for MultibodyPlant<double> and SceneGraph<double>. It calls FCL to compute the signed distance between these geometries (including witness points). It is true that these signed distance queries don't support AutoDiffXd yet, only the double type. But as you will see below, you don't need a signed distance query with AutoDiffXd to compute the gradient of the distance.
You could try to construct MinimumDistanceConstraint for a MultibodyPlant<double> with this API. Although you use MultibodyPlant<double>, not MultibodyPlant<AutoDiffXd>, MinimumDistanceConstraint can still be evaluated with AutoDiffXd. Specifically, it computes the gradient of the signed distance here. To compute that gradient, the signed distance query doesn't need to support AutoDiffXd; we can compute it from the witness points and the normal vectors as
∂d / ∂q = n̂_BA_Wᵀ * (∂p_CbCa_W / ∂q)
where d is the signed distance, q is the robot posture, n̂_BA_W is the contact normal, and ∂p_CbCa_W / ∂q is the Jacobian of the vector from the witness point on body B to the witness point on body A. With this gradient, we have solved collision-free inverse kinematics problems with MultibodyPlant<double>.
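If it helps, here is roughly what that computation looks like in pydrake with a double-typed query (a sketch from the formula above; it glosses over the geometry-frame-to-body-frame offset that the real implementation handles):

```python
import numpy as np
from pydrake.multibody.tree import JacobianWrtVariable

def signed_distance_gradient(plant, context, query_object, pair):
    """∂d/∂q for one SignedDistancePair `pair` from a double-typed query."""
    inspector = query_object.inspector()
    body_A = plant.GetBodyFromFrameId(inspector.GetFrameId(pair.id_A))
    body_B = plant.GetBodyFromFrameId(inspector.GetFrameId(pair.id_B))
    world = plant.world_frame()
    # Translational Jacobians of the witness points Ca and Cb in world.
    Jq_WCa = plant.CalcJacobianTranslationalVelocity(
        context, JacobianWrtVariable.kQDot,
        body_A.body_frame(), pair.p_ACa, world, world)
    Jq_WCb = plant.CalcJacobianTranslationalVelocity(
        context, JacobianWrtVariable.kQDot,
        body_B.body_frame(), pair.p_BCb, world, world)
    # ∂d/∂q = n̂_BA_Wᵀ * (∂p_WCa/∂q − ∂p_WCb/∂q)
    return pair.nhat_BA_W @ (Jq_WCa - Jq_WCb)
```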
Are there functions within OpenCV that will 'track' a gradually changing curve without following sharply divergent crossing lines? Ex: If one were attempting to track individual outlines of two crossed boomerangs, is there an easy way to follow the curved line 'through' the intersection where the two boomerangs cross?
This would require some kind of inertial component that would continue a 'virtual' line when the curve was interrupted by the other crossed boomerang, and then find the continuation of the original line on the opposite side.
This seems simple, but it sounds so complicated when trying to explain it. :-) It does seem like a scenario that would occur often (attempting to trace an occluded object). Perhaps part of a third party library or specialized project?
I believe I have found an approach to this. OpenCV's approxPolyDP finds polygons that approximate a contour. It is relatively easy to track angles between the polygon's sides (as opposed to finding continuous tangents to curves). When an 'internal' angle is found where the two objects meet, it should be possible to match it with a corresponding internal angle on the opposite side.
Ex: When two bananas/boomerangs/whatever overlap, the outline will form a sort of cross, with four points and four 'internal angles' (> 180 degrees). It should be possible to match the coordinates of the four internal angles. If their corresponding lines (last known trajectory before overlap) are close enough to parallel, then that indicates overlapping objects rather than one more complex shape.
approxPolyDP simplifies this to geometry and trig. This should be a much easier solution than what I had previously envisioned with continuous bezier curves and inertia. I should have thought of this earlier.
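A quick sketch of that pipeline in Python (the file path, threshold, and epsilon are illustrative):

```python
import cv2
import numpy as np

img = cv2.imread("crossed_boomerangs.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contour = max(contours, key=cv2.contourArea)

eps = 0.01 * cv2.arcLength(contour, True)
poly = cv2.approxPolyDP(contour, eps, True).reshape(-1, 2).astype(float)

reflex = []
n = len(poly)
for i in range(n):
    v1 = poly[i] - poly[i - 1]        # incoming edge
    v2 = poly[(i + 1) % n] - poly[i]  # outgoing edge
    # The sign of the 2D cross product separates convex from reflex
    # vertices; which sign means reflex depends on the contour winding.
    if v1[0] * v2[1] - v1[1] * v2[0] < 0:
        reflex.append(poly[i])
# Pairing opposite reflex vertices whose incoming edges are near-parallel
# marks where one object occludes the other.
```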
I have a field filled with obstacles, I know where they are located, and I know the robot's position. Using a path-finding algorithm, I calculate a path for the robot to follow.
Now my problem is, I am guiding the robot from grid to grid but this creates a not-so-smooth motion. I start at A, turn the nose to point B, move straight until I reach point B, rinse and repeat until the final point is reached.
So my question is: What kind of techniques are used for navigating in such an environment so that I get a smooth motion?
The robot has two wheels and two motors; I change direction by running the motors in reverse.
EDIT: I can vary the speed of the motors. Basically, the robot is an Arduino plus an Ardumoto shield; I can supply values between 0 and 255 to the motors in either direction.
You need feedback linearization for a differentially driven robot. This document explains it in Section 2.2. I've included relevant portions below:
The simulated robot required for the project is a differential drive robot with a bounded velocity. Since differential drive robots are nonholonomic, the students are encouraged to use feedback linearization to convert the kinematic control output from their algorithms to control the differential drive robots. The transformation follows:

  v = ẋ·cos θ + ẏ·sin θ
  ω = (−ẋ·sin θ + ẏ·cos θ) / L

where v and ω are the linear and angular velocities, ẋ and ẏ are the kinematic (Cartesian) velocities, and L is an offset length proportional to the wheel base dimension of the robot.
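In code, that transformation is just (a tiny sketch; L is the tuning parameter from the quote):

```python
import numpy as np

def feedback_linearize(xdot, ydot, theta, L=0.1):
    """Map desired Cartesian velocities (xdot, ydot) to (v, omega)."""
    v = np.cos(theta) * xdot + np.sin(theta) * ydot
    omega = (-np.sin(theta) * xdot + np.cos(theta) * ydot) / L
    return v, omega
```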
One control algorithm I've had pretty good results with is pure pursuit. Basically, the robot attempts to move to a point along the path a fixed distance ahead of the robot. So as the robot moves along the path, the look ahead point also advances. The algorithm compensates for non-holonomic constraints by modeling possible paths as arcs.
Larger look ahead distances will create smoother movement. However, larger look ahead distances will also cause the robot to cut corners, which may lead to collisions with obstacles. You can fix this problem with ideas from a reactive control algorithm called Vector Field Histogram (VFH). VFH basically pushes the robot away from close walls. While it normally uses a range-finding sensor of some sort, you can extrapolate the relative locations of the obstacles since you know the robot pose and the obstacle locations.
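A bare-bones pure pursuit step looks something like this (illustrative; the look-ahead distance and the waypoint selection are deliberately naive):

```python
import numpy as np

def pure_pursuit_curvature(pose, path, lookahead=0.5):
    """pose = (x, y, theta); path = Nx2 array of waypoints."""
    x, y, theta = pose
    # Naive goal selection: first waypoint at least `lookahead` away.
    dists = np.hypot(path[:, 0] - x, path[:, 1] - y)
    far = np.nonzero(dists >= lookahead)[0]
    goal = path[far[0]] if len(far) else path[-1]
    # Lateral offset of the goal in the robot frame.
    dx, dy = goal[0] - x, goal[1] - y
    gy = -np.sin(theta) * dx + np.cos(theta) * dy
    L = np.hypot(dx, dy)
    # Arc through the goal: curvature = 2*gy / L^2.
    return 2.0 * gy / L**2

# Differential drive with track width W at forward speed v:
#   v_left = v * (1 - curvature * W / 2); v_right = v * (1 + curvature * W / 2)
# then scale to the 0-255 PWM range.
```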
My initial thoughts on this (I'm at work so can't spend too much time):
It depends on how tight you want or need your corners to be (which would depend on how much distance your path finder gives you from the obstacles)
Given the width of the robot you can calculate the turning radius given the speeds for each wheel. Assuming you want to go as fast as possible and that skidding isn't an issue, you will always keep the outside wheel at 255 and reduce the inside wheel down to the speed that gives you the required turning radius.
Given the angle for any particular turn on your path and the turning radius that you will use, you can work out the distance from that node where you will slow down the inside wheel.
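In numbers (a sketch; the track width W = 0.2 m is an assumption): for a differential drive, v_in/v_out = (R − W/2)/(R + W/2), so:

```python
def inner_wheel_speed(v_out, radius, W=0.2):
    """Inner wheel speed giving turn radius `radius` when the outer
    wheel runs at v_out (same units in, same units out)."""
    return v_out * (radius - W / 2) / (radius + W / 2)

# Outer wheel pinned at 255 PWM, a 0.5 m radius turn, 0.2 m track width:
pwm_inner = inner_wheel_speed(255, 0.5)  # ~170
```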
An optimization approach is a very general way to handle this.
Use your calculated path as input to a generic non-linear optimization algorithm (your choice!) with a cost function made up of closeness of the answer trajectory to the input trajectory as well as adherence to non-holonomic constraints, and any other constraints you want to enforce (e.g. staying away from the obstacles). The optimization algorithm can also be initialised with a trajectory constructed from the original trajectory.
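As a toy example of this recipe (scipy's L-BFGS-B standing in for "your choice" of optimizer; the weights and the obstacle penalty are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def smooth_path(path, obstacles, clearance=0.3,
                w_close=1.0, w_smooth=10.0, w_obs=100.0):
    """path: Nx2 waypoints from the planner; obstacles: Mx2 points."""
    path = np.asarray(path, float)
    obstacles = np.asarray(obstacles, float)
    n = len(path)

    def cost(flat):
        p = flat.reshape(n, 2)
        close = np.sum((p - path) ** 2)                       # stay near the input
        smooth = np.sum((p[2:] - 2 * p[1:-1] + p[:-2]) ** 2)  # penalize kinks
        d = np.linalg.norm(p[:, None, :] - obstacles[None, :, :], axis=2)
        obs = np.sum(np.maximum(clearance - d, 0.0) ** 2)     # soft clearance
        return w_close * close + w_smooth * smooth + w_obs * obs

    res = minimize(cost, path.ravel(), method="L-BFGS-B")
    return res.x.reshape(n, 2)
```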
Marc Toussaint's robotics course notes are a good source for this type of approach. See in particular lecture 7:
http://userpage.fu-berlin.de/mtoussai/teaching/10-robotics/