How to use inverse dynamics controller on under-actuated systems - drake

I'm trying to use Drake's inverse dynamics controller on an arm with a floating base, and based on this discussion it seems like the most straightforward way to go about this is to use two separate plants, since the controller only supports fully actuated systems.
Following Python bindings error when adding two plants to a scene graph in pyDrake, I attempted to create two plants using the following code:
from pydrake.all import DiagramBuilder, MultibodyPlant, SceneGraph

def register_plant_with_scene_graph(scene_graph, plant):
    plant.RegsterAsSourceForSceneGraph(scene_graph)
    builder.Connect(
        plant.get_geometry_poses_output_port(),
        scene_graph.get_source_pose_port(plant.get_source_id()),
    )
    builder.Connect(
        scene_graph.get_query_output_port(),
        plant.get_geometry_query_input_port(),
    )

builder = DiagramBuilder()
scene_graph = builder.AddSystem(SceneGraph())
plant_1 = builder.AddSystem(MultibodyPlant(time_step=0.0))
register_plant_with_scene_graph(scene_graph, plant_1)
plant_2 = builder.AddSystem(MultibodyPlant(time_step=0.0))
register_plant_with_scene_graph(scene_graph, plant_2)
which produced the error
AttributeError: 'MultibodyPlant_[float]' object has no attribute 'RegsterAsSourceForSceneGraph'
This seems odd because, according to the documentation, the function should exist.
Is this function available in the python bindings for drake? Also, more broadly, is this the correct way to approach using the inverse dynamics controller on a free-floating manipulator?

Inverse dynamics takes desired positions, velocities, and accelerations and computes the required torques. If your robot has a floating base, then you cannot accept arbitrary acceleration commands. For instance, the total center of mass of your robot will be falling according to gravity; any commanded acceleration that does not satisfy this requirement has no feasible inverse dynamics solution. I think there must be something more that we need to understand about your problem formulation.
Often when people ask this question, they are thinking of a robot that is relying on contact forces in addition to generalized force/torques in order to achieve the requested accelerations. In that case, the problem needs to include those contact forces as decision variables, too. Since contact forces have unilateral constraints (e.g. feet cannot pull on the ground), and friction cone constraints, this inverse dynamics problem is almost always formulated as a quadratic program. For instance, as in this paper. We don't currently provide that QP formulation in Drake, but it is not hard to write it against the MathematicalProgram interface. And we do have some older code that was removed from Drake (since it wasn't actively developed) that we can point you to if it helps.
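To make that concrete, here is a minimal sketch of such a QP written against MathematicalProgram. This is not an existing Drake API: the mass matrix M, bias term Cv, gravity forces tau_g, actuation matrix B, stacked contact Jacobian Jc, and desired accelerations vdot_des are assumed to be evaluated elsewhere from the plant context, the friction cone is linearized as a box for brevity, and the import path varies across Drake versions.

import numpy as np
from pydrake.solvers import MathematicalProgram, Solve

def contact_inverse_dynamics(M, Cv, tau_g, B, Jc, vdot_des, mu=1.0):
    nv, nu, nf = M.shape[0], B.shape[1], Jc.shape[0]
    prog = MathematicalProgram()
    vdot = prog.NewContinuousVariables(nv, "vdot")
    u = prog.NewContinuousVariables(nu, "u")
    f = prog.NewContinuousVariables(nf, "f")  # stacked [fx, fy, fz] per contact
    # Manipulator dynamics: M vdot + Cv = tau_g + B u + Jc^T f.
    x = np.concatenate([vdot, u, f])
    prog.AddLinearEqualityConstraint(np.hstack([M, -B, -Jc.T]), tau_g - Cv, x)
    # Unilateral contact and linearized friction cone, per contact point.
    for i in range(0, nf, 3):
        fx, fy, fz = f[i], f[i + 1], f[i + 2]
        prog.AddLinearConstraint(fz >= 0)
        for ft in (fx, fy):
            prog.AddLinearConstraint(ft <= mu * fz)
            prog.AddLinearConstraint(-ft <= mu * fz)
    # Track the desired accelerations as closely as possible.
    prog.AddQuadraticErrorCost(np.eye(nv), vdot_des, vdot)
    return Solve(prog).GetSolution(u)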

Related

Correspondence relationship between the URDF file and the state of MultibodyPlant in drake

In the case of the Compass Gait Limit Cycle, the system is obtained from 'models/compass_gait_limit_cycle.urdf' and the context from CreateDefaultContext().
As I understand it, these define the configuration vector q as [x, y, θ_1, θ_2]^T.
When the URDF model is more complicated (e.g. it has several branched structures), I can obtain the system and the context in the same way, but I cannot tell which component of q corresponds to which joint in the URDF file.
Is there some way to determine this correspondence?
Ideally, I would like to get the index into q from the name of a joint in the URDF file, but anything is fine as long as I can determine the correspondence.
In addition, I would like to know how to rearrange the order of the components of q (and also of qd and qdd).
For example, in the case of the compass gait, the default order is [x, y, θ_1, θ_2] and the new order would be [θ_1, θ_2, x, y] or something like this.
Perhaps creating a context myself is the best way, but I do not understand the context completely, so I need some instruction on how to create one.
I am using pydrake on Ubuntu 20.04.
Thank you for your answer.
If you are using MultibodyPlant (loaded from URDF), then MultibodyPlant controls the order of the variables in the context state vector. As you say, it is intended that you access them via the joint accessors. In Python, I have a simple method that packs them into a namedview, which is very convenient for working with them: https://github.com/RussTedrake/underactuated/blob/ee7a2041047772f823d0dc0dd32e992196b2a670/underactuated/utils.py#L32-L86
You can see how I use this in a number of examples in the underactuated notes. Perhaps the littledog example shows it best.
If you want to use them in a different order in a different system, then I would recommend adding something like a MatrixGain system that implements a permutation matrix to rearrange them.
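For the name-to-index lookup and the reordering, here is a minimal pydrake sketch, assuming a plant and builder you have already set up and the compass gait ordering above (the joint name is a placeholder; use the names from your URDF):

import numpy as np
from pydrake.systems.primitives import MatrixGain

# Index of a named joint's position within q:
joint = plant.GetJointByName("left_leg_pin")  # placeholder name
q_index = joint.position_start()

# Permutation [x, y, theta_1, theta_2] -> [theta_1, theta_2, x, y],
# implemented as a MatrixGain wired into the diagram:
P = np.zeros((4, 4))
for new_i, old_i in enumerate([2, 3, 0, 1]):
    P[new_i, old_i] = 1.0
permute = builder.AddSystem(MatrixGain(P))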

Computing the generalized forces arising due to ExternallyAppliedSpatialForces in Drake, e.g. due to Propeller

I am working with a diagram which includes a MultibodyPlant with a Propeller connected to it. The Propeller actually realizes numerous physical propellers which are distributed among the bodies of the MultibodyPlant.
I am able to simulate the dynamics of the combined system by setting the prop forces with FixValue, so I'm on the right track.
What I'd like to be able to do is, given a configuration for the system (i.e. the MultibodyPlant context) and a chosen propeller command, compute the generalized forces acting on the system. My sense is that this is not immediately available, since the simulation actually uses the RNEA and so never aggregates the forces in that way. For what I'm doing (and even just as a sanity check), I would like to compute the forces directly, not just their effect on the evolution of the state.
Is there an existing method to compute this built into Drake, or should I compute it manually using the spatial jacobian of each propeller frame and the applied SpatialForce of the corresponding propeller? (Something along the lines of this question: How to get the matrix that maps external forces to generalized forces?)
Many thanks for your help.
I understand better now. It's a very reasonable request! You have two systems at play: the Propeller and MultibodyPlant. Unfortunately, the quantity you want is all of Propeller and just a piece of MultibodyPlant. We don't offer direct access to the B(q) matrix in that case.
What you can do, with either AutoDiffXd or symbolic::Expression, is call CalcImplicitTimeDerivativesResidual on the Diagram to get the entire dynamics in implicit form (avoiding the M(q) inverse). You could call it twice: once with the Propeller inputs set via FixValue as AutoDiffXd and/or symbolic::Variable, and again with them set to zero, then take the difference.
Note: CalcImplicitTimeDerivativesResidual is relatively new; I haven't pushed the python bindings for it yet (but it's been on my list). Do you need it from python?
I think that perhaps you are looking for the MultibodyPlant reaction_forces output port?
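As for the manual route mentioned in the question (computing tau = J^T F per propeller frame yourself), a minimal pydrake sketch might look like this; the body, application point p_BQ, and spatial force F_Bq_W are placeholders you would take from each propeller:

import numpy as np
from pydrake.multibody.tree import JacobianWrtVariable

def generalized_forces(plant, context, body, F_Bq_W, p_BQ):
    # 6 x nv Jacobian of the spatial velocity of point Q of body B,
    # measured and expressed in the world frame.
    J = plant.CalcJacobianSpatialVelocity(
        context, JacobianWrtVariable.kV, body.body_frame(), p_BQ,
        plant.world_frame(), plant.world_frame())
    # Stack [torque; force] to match the angular-then-linear row order.
    F = np.concatenate([F_Bq_W.rotational(), F_Bq_W.translational()])
    return J.T @ F  # one entry per generalized velocity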

PrismaticJoint limits not enforced

I'm creating a system of radially expanding bodies connected by PrismaticJoints, and finding that, although I initialized each joint with position limits, the joints easily pass these limits under external forces like gravity. Here is a plot showing some joints' translations over time, passing the lower and upper limits at 3.5 and 4.2:
What am I missing? My call to create the joints looks like this:
const multibody::Joint<double>& joint = plant_->AddJoint<drake::multibody::PrismaticJoint>(
    sphere_name + "_joint",
    center_body, std::nullopt,
    connect_body, std::nullopt,
    unitVlist()[j], r_low, r_upp, 0);
where
*_body are bodies,
unitVlist() returns a list of unit vectors to pull from,
r_low and r_upp are doubles corresponding to the lower and upper limits.
Currently joint limits in Drake are only enforced by the discrete solver, that is, what you get if you supply a time step in the MultibodyPlant constructor. Our continuous integrators don't see the limits yet. We are aware of that but I couldn't actually find a GitHub issue complaining about it -- would you mind filing one? You can do it here (select "New Issue").
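In other words, constructing the plant with a nonzero time step should make the limits take effect. A minimal sketch (shown in Python; the same constructor argument exists in C++):

from pydrake.multibody.plant import MultibodyPlant

# Discrete plant: the discrete solver enforces the joint limits.
plant = MultibodyPlant(time_step=1e-3)

# Continuous plant: joint limits are currently ignored by the integrators.
# plant = MultibodyPlant(time_step=0.0)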

How to Implement Integrated Random Walk Trend Component in Tensorflow Probability

I'm running tensorflow 2.1 and tensorflow_probability 0.9. I have fit a Structural Time Series Model with a seasonal component.
I wish to implement an Integrated Random walk in order to smooth the trend component, as per Time Series Analysis by State Space Methods: Second Edition, Durbin & Koopman. The integrated random walk is achieved by setting the level component variance to equal 0.
Is implementing this constraint possible in Tensorflow Probability?
Further to this, Durbin & Koopman also discuss higher-order random walks. Could these be implemented?
Thanks in advance for your time.
If I understand correctly, an integrated random walk is just the special case of LocalLinearTrend in which the level simply integrates the randomly-evolving slope component (ie it has no independent source of variation). You could patch this in by subclassing LocalLinearTrend and fixing level_scale = 0. in the models it builds:
import tensorflow_probability as tfp
sts = tfp.sts

class IntegratedRandomWalk(sts.LocalLinearTrend):
  def __init__(self,
               slope_scale_prior=None,
               initial_slope_prior=None,
               observed_time_series=None,
               name=None):
    super(IntegratedRandomWalk, self).__init__(
        slope_scale_prior=slope_scale_prior,
        initial_slope_prior=initial_slope_prior,
        observed_time_series=observed_time_series,
        name=name)
    # Remove 'level_scale' parameter from the model.
    del self._parameters[0]

  def _make_state_space_model(self,
                              num_timesteps,
                              param_map,
                              initial_state_prior=None,
                              initial_step=0):
    # Fix `level_scale` to zero, so that the level cannot change
    # except by integrating the slope.
    param_map['level_scale'] = 0.
    return super(IntegratedRandomWalk, self)._make_state_space_model(
        num_timesteps=num_timesteps,
        param_map=param_map,
        initial_state_prior=initial_state_prior,
        initial_step=initial_step)
(it would be mathematically equivalent to build a LocalLinearTrend with level_scale_prior concentrated at zero, but that constraint makes inference difficult, so it's generally better to just ignore or remove the parameter entirely, as I did here).
By higher-order random walks, do you mean autoregressive models? If so, sts.Autoregressive might be relevant.
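Continuing the sketch above, hypothetical usage with your seasonal component might look like this (observed_time_series and num_seasons are placeholders):

trend = IntegratedRandomWalk(observed_time_series=observed_time_series)
seasonal = sts.Seasonal(num_seasons=12, observed_time_series=observed_time_series)
model = sts.Sum([trend, seasonal], observed_time_series=observed_time_series)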

Matrix Concatenation using Actionscript Matrix3D

I want to get the properly rendered projection result from a Stage3D framework that presents something of a 'gray box' interface via its API. It is gray rather than black because I can see this critical snippet of source code:
matrix3D.copyFrom (renderable.getRenderSceneTransform (camera));
matrix3D.append (viewProjection);
The projection rendering technique that perfectly suits my needs comes from a helpful tutorial that works directly with AGAL rather than any particular framework. Its comparable rendering logic snippet looks like this:
cube.mat.copyToMatrix3D (drawMatrix);
drawMatrix.prepend (worldToClip);
So, I believe the correct, general summary of what is going on here is that both pieces of code are setting up the proper combined matrix to be sent to the Vertex Shader where that matrix will be a parameter to the m44 AGAL operation. The general description is that the combined matrix will take us from Object Local Space through Camera View Space to Screen or Clipping Space.
My problem arises from my ignorance of proper matrix operations. I believe my attempt to merge the two environments fails precisely because prepending one matrix to another is not, and was never intended to be, equivalent to appending it. My request can be summarized this way: because I have no control over the calling sequence the framework will issue (I must live with an append operation), I can only adjust the side where I prepare the matrix that will be appended. That code is not black-boxed, but it is too complex for me to know how to change it to meet the interface requirements posed by the framework.
Is there some sequence of inversions, transformations or other manuevers which would let me modify a viewProjection matrix that was designed to be prepended, so that it will turn out right when it is, instead, appended to the Object's World Space coordinates?
I am providing an answer more out of desperation than sure understanding, and still hope I will receive a better answer from those more knowledgeable. From Dunn and Parberry's "3D Math Primer" I learned that "transposing the product of two matrices is the same as taking the product of their transposes in reverse order."
Without being able to understand how to enter text involving superscripts, I am not sure if I can reduce my approach to a helpful mathematical formulation, so I will invent a syntax using functional notation. The equivalency noted by Dunn and Parberry would be something like:
transpose (AB) = transpose (B) x transpose (A)
That comes close to solving my problem, which problem, to restate, is really just a problem arising out of the fact that I cannot control the behavior of the internal matrix operations in the framework package. I can, however, perform appropriate matrix operations on either side of the workflow from local object coordinates to those required by the GPU Vertex Shader.
I have not yet finished testing my solution, which requires a final step in the AGAL shader, but I have been able to confirm in AS3 that the last 'un-transpose' yields exactly the same combined raw data as the reference example (from the author of the camera with the desired lens properties), whose implementation prepends rather than appends.
BA = transpose (transpose (A) x transpose (B))
I have also not yet tested to see if these extra calculations are so processing intensive as to reduce my application frame rate beyond what is acceptable, but am pleased at least to be able to confirm that the computations yield the same result.
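A quick numerical check of that identity, using numpy as a stand-in for Matrix3D:

import numpy as np

A = np.random.rand(4, 4)
B = np.random.rand(4, 4)

# transpose(AB) = transpose(B) x transpose(A), and therefore
# BA = transpose(transpose(A) x transpose(B)).
assert np.allclose(B @ A, (A.T @ B.T).T)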
