Applying an external force to an object in pydrake - drake

This question is strongly related to adding-forces-to-body-post-finalize.
I would like to be able to apply an external force to simple geometric primitives in pydrake.
This is to perform an evaluation of interactions between bodies.
My current implementation:
builder = DiagramBuilder()
plant = builder.AddSystem(MultibodyPlant(0.001))
parser = Parser(plant)
cube_instance = parser.AddModelFromFile('cube.urdf', model_name='cube')
plant.Finalize()
force = builder.AddSystem(ConstantVectorSource(np.zeros(6)))
builder.Connect(force.get_output_port(0), plant.get_applied_spatial_force_input_port())
diagram = builder.Build()
However, when I run it, I get the following error:
builder.Connect(force.get_output_port(0), plant.get_applied_spatial_force_input_port())
RuntimeError: DiagramBuilder::Connect: Cannot mix vector-valued and abstract-valued ports while connecting output port y0 of System drake/systems/ConstantVectorSource#0000000002db5aa0 to input port applied_spatial_force of System drake/multibody/MultibodyPlant#0000000003118680
My inclination is that I have to implement a LeafSystem that provides the abstract-valued port the plant expects.
Update based on Suggestion
Use: AbstractValue and ConstantValueSource
value = AbstractValue.Make([np.zeros(6)])
force = builder.AddSystem(ConstantValueSource(value))
ref_vector_externally_applied_spatial_force = plant.get_applied_spatial_force_input_port()
builder.Connect(force.get_output_port(0), ref_vector_externally_applied_spatial_force)
I got the following error:
RuntimeError: DiagramBuilder::Connect: Mismatched value types while connecting
output port y0 of System drake/systems/ConstantValueSource#0000000002533a30 (type pybind11::object) to
input port applied_spatial_force of System drake/multibody/MultibodyPlant#0000000002667760 (type std::vector<drake::multibody::ExternallyAppliedSpatialForce<double>,std::allocator<drake::multibody::ExternallyAppliedSpatialForce<double>>>)
Which makes sense: the types of the input and output ports should match.
The expected type seems to be a vector of ExternallyAppliedSpatialForce.
I then changed the Abstract type as follows:
value = AbstractValue.Make(ExternallyAppliedSpatialForce())
RuntimeError: DiagramBuilder::Connect: Mismatched value types while connecting
output port y0 of System drake/systems/ConstantValueSource#0000000002623980 (type drake::multibody::ExternallyAppliedSpatialForce<double>)
to input port applied_spatial_force of System drake/multibody/MultibodyPlant#00000000027576b0 (type std::vector<drake::multibody::ExternallyAppliedSpatialForce<double>,std::allocator<drake::multibody::ExternallyAppliedSpatialForce<double>>>)
I am getting closer. However, I was not able to send a vector of ExternallyAppliedSpatialForce. If I try to send it as a list, I get a complaint that it is unable to pickle the object. I did not see in the AbstractValue examples how to create such a vector of objects.
Any additional help would be greatly appreciated.
The solution was to use the type VectorExternallyAppliedSpatialForced.
The full working example is posted below.

Here is a working example of applying an external static force to a rigid object:
import time
import numpy as np
from pydrake.common.value import AbstractValue
from pydrake.geometry import ConnectDrakeVisualizer
from pydrake.geometry.render import MakeRenderEngineVtk, RenderEngineVtkParams
from pydrake.multibody.math import SpatialForce
from pydrake.multibody.parsing import Parser
from pydrake.multibody.plant import (
    AddMultibodyPlantSceneGraph, ExternallyAppliedSpatialForce,
    VectorExternallyAppliedSpatialForced)
from pydrake.systems.analysis import Simulator
from pydrake.systems.framework import DiagramBuilder
from pydrake.systems.primitives import ConstantValueSource

sim_time_step = 0.001
builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, sim_time_step)
object_instance = Parser(plant).AddModelFromFile('box.urdf')
scene_graph.AddRenderer("renderer", MakeRenderEngineVtk(RenderEngineVtkParams()))
ConnectDrakeVisualizer(builder, scene_graph)
plant.Finalize()

# A constant spatial force of 10 N along +z, applied at the body origin.
force_object = ExternallyAppliedSpatialForce()
force_object.body_index = plant.GetBodyIndices(object_instance).pop()
force_object.F_Bq_W = SpatialForce(tau=np.zeros(3), f=np.array([0., 0., 10.]))
forces = VectorExternallyAppliedSpatialForced()
forces.append(force_object)
value = AbstractValue.Make(forces)
force_system = builder.AddSystem(ConstantValueSource(value))
builder.Connect(force_system.get_output_port(0),
                plant.get_applied_spatial_force_input_port())
diagram = builder.Build()

simulator = Simulator(diagram)
context = simulator.get_mutable_context()
# Set the initial pose through the plant's own subcontext, not the diagram context.
plant_context = diagram.GetMutableSubsystemContext(plant, context)
plant.SetPositions(plant_context, object_instance, [0, 0, 0, 1, 0, 0, 0])

time_ = 0
while True:
    time_ += sim_time_step
    simulator.AdvanceTo(time_)
    time.sleep(sim_time_step)
However, I was not able to change the externally applied force from within the simulation loop afterwards.
To achieve this, I had to make a LeafSystem.
An implementation that allows you to change the force applied to a rigid object over time can be found here
Both static and dynamic examples can be found here.
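For orientation only, here is a rough sketch of what such a LeafSystem can look like. This is not the linked implementation: the class name, the set_force() method, and the use of a plain Python member are my own illustrative choices, and the snippet reuses the imports from the example above plus LeafSystem from pydrake.systems.framework.

from pydrake.systems.framework import LeafSystem

class SpatialForceSource(LeafSystem):
    """Abstract-valued force source whose force can be changed between steps."""

    def __init__(self, body_index):
        LeafSystem.__init__(self)
        self._body_index = body_index
        self._f_W = np.zeros(3)  # current force, expressed in the world frame
        self.DeclareAbstractOutputPort(
            "spatial_forces",
            lambda: AbstractValue.Make(VectorExternallyAppliedSpatialForced()),
            self._calc_forces)

    def set_force(self, f_W):
        """Call this from the simulation loop to change the applied force."""
        self._f_W = np.asarray(f_W)

    def _calc_forces(self, context, output):
        force = ExternallyAppliedSpatialForce()
        force.body_index = self._body_index
        force.p_BoBq_B = np.zeros(3)  # apply at the body origin
        force.F_Bq_W = SpatialForce(tau=np.zeros(3), f=self._f_W)
        forces = VectorExternallyAppliedSpatialForced()
        forces.append(force)
        output.set_value(forces)

The system is added with builder.AddSystem and connected to plant.get_applied_spatial_force_input_port() exactly like the ConstantValueSource above; in the loop one would call force_system.set_force([0., 0., 20.]) before each AdvanceTo. (In newer Drake releases the port's value type is a plain Python list of ExternallyAppliedSpatialForce rather than VectorExternallyAppliedSpatialForced.)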

No need to implement a LeafSystem — we have a ConstantValueSource that is analogous to ConstantVectorSource, but for Abstract types.
https://drake.mit.edu/doxygen_cxx/classdrake_1_1systems_1_1_constant_value_source.html
And you’re correct that you need the abstract type for this port: https://drake.mit.edu/doxygen_cxx/classdrake_1_1multibody_1_1_multibody_plant.html#ab2ad1faa7547d440f008cdddd32d85e8
Some examples of working with the abstract values from python can be found here: https://github.com/RobotLocomotion/drake/blob/d6133a04/bindings/pydrake/systems/test/value_test.py#L84
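For completeness, the intended wiring with ConstantValueSource looks roughly like this (using the value type name from the pydrake version in the question; newer versions expect a plain list of ExternallyAppliedSpatialForce):

forces = VectorExternallyAppliedSpatialForced()   # the port's abstract value type
force_system = builder.AddSystem(ConstantValueSource(AbstractValue.Make(forces)))
builder.Connect(force_system.get_output_port(0),
                plant.get_applied_spatial_force_input_port())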

Related

What does 'RuntimeError: PyFunctionConstraint: Output must be of scalar type AutoDiffXd, but unable to infer scalar type.' mean?

When attempting to solve my mathematical program, Drake produces the RuntimeError: PyFunctionConstraint: Output must be of scalar type AutoDiffXd, but unable to infer scalar type.
The constraint responsible for the error is as follows:
def non_penetration_constraint(q):
    plant_ad.SetPositions(context_ad, q)
    Distances = np.zeros(6, dtype=q.dtype)
    i = 0
    for name in Claw_name:
        g_ids = body_geometries[name]
        assert(len(g_ids) == 1)
        Distances[i] = query_object.ComputeSignedDistancePairClosestPoints(
            Worldbody_id, g_ids[0]).distance
        i += 1
    return Distances
Called with:
for n in range(N-1):
    # Non penetration constraint
    prog.AddConstraint(non_penetration_constraint,
                       lb=[0] * nf,
                       ub=[0] * nf,
                       vars=q[:, n])
Has anybody had a similar issue before?
I suspect that your query_object (and perhaps other values) are not from the autodiff version of your plant/scene_graph.
Constraints implemented like this can potentially be called with q[0] either a type float or type AutoDiffXd. See the "writing custom evaluator" section of this Drake tutorial. In your case, the return value from your constraint is coming out of the query_object, which is not impacted directly by the plant_ad.SetPositions call (which seems suspicious). You'll want to make sure the query_object is generated from the autodiff scene_graph, and presumably after you set the positions to your constraint value.
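For illustration, here is a hedged sketch of that pattern, reusing the names from the question. It assumes plant_ad is the autodiff plant, context_ad is the plant's subcontext of the autodiff diagram's root context, and the plant is connected to a SceneGraph so its geometry query input port can be evaluated:

def non_penetration_constraint(q):
    plant_ad.SetPositions(context_ad, q)
    # Re-evaluate the QueryObject from the autodiff context *after* setting the
    # positions, so the signed distances carry the gradients of q.
    query_object_ad = plant_ad.get_geometry_query_input_port().Eval(context_ad)
    distances = np.zeros(6, dtype=q.dtype)
    for i, name in enumerate(Claw_name):
        g_ids = body_geometries[name]
        distances[i] = query_object_ad.ComputeSignedDistancePairClosestPoints(
            Worldbody_id, g_ids[0]).distance
    return distances

A fully robust version would also branch on the dtype of q and use the float plant/context when the solver calls the constraint with plain floats, as described in the custom-evaluator tutorial.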

Constraints for Direct Collocation method in pydrake

I am looking for a way to describe the constraints of the Direct Collocation method in pydrake.
I got the robot model from my own URDF by using FindResource, as in this (l. 11-16).
Then, I tried to make some functions which calculate the positions of the joints, like swing_foot_height(q) in this.
However, there is a problem; it is maybe a type error.
I defined q as follows:
robot = MultibodyPlant(time_step=0.0)
scene_graph = SceneGraph()
robot.RegisterAsSourceForSceneGraph(scene_graph)
file_name = FindResource("models/robot.urdf")
Parser(robot).AddModelFromFile(file_name)
robot.Finalize()
context = robot.CreateDefaultContext()
dircol = DirectCollocation(
    robot,
    context,
    ...(Omission)...
    input_port_index=robot.get_actuation_input_port().get_index())
x = dircol.state()
nq = robot.num_positions()
q = x[0:nq]
Then, I used this q in functions like swing_foot_height(q).
The error is like
SetPositions(): incompatible function arguments. The following argument types are supported:
...
q: numpy.ndarray[numpy.float64[m, 1]]
...
Invoked with:
...
array([Variable('x(0)', Continuous), ... Variable('x(9)', Continuous)],dtype=object)
Is there some way to avoid this error?
Right. In the compass gait notebook that you cited, there was an important line:
# overwrite MultibodyPlant with its autodiff copy
compass_gait = compass_gait.ToAutoDiffXd()
so that the MultibodyPlant being used in the constraint is actually an AutoDiffXd version of the plant.
The littledog notebook has more examples of this, with a more robust implementation that works for both float and autodiff constraint evaluations.
As far as I understand this, trajectory optimization with DirectCollocation converts the data type of the decision variables (in your case, x and q) to AutoDiffXd type. That is the type you're seeing here in the "Invoked with" error message. This is the type used for automatic differentiation which is used for finding the gradients for the optimization solver.
You'll need to convert back to float to use the SetPositions() function.
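A minimal sketch of that float/autodiff dispatch pattern, in the style of the littledog notebook (robot and context are from the question; robot_ad, context_ad, and the "swing_foot" frame name are assumptions for illustration):

robot_ad = robot.ToAutoDiffXd()
context_ad = robot_ad.CreateDefaultContext()

def swing_foot_height(q):
    # Pick the plant/context whose scalar type matches the incoming variables.
    if q.dtype == float:
        plant, plant_context = robot, context
    else:
        plant, plant_context = robot_ad, context_ad
    plant.SetPositions(plant_context, q)
    foot = plant.GetFrameByName("swing_foot")  # frame name is an assumption
    X_WF = plant.CalcRelativeTransform(plant_context, plant.world_frame(), foot)
    return [X_WF.translation()[2]]  # z-height of the swing foot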

Different access methods to Pyro Paramstore give different results

I am following the Pyro introductory tutorial on forecasting, and when trying to access the learned parameters after training the model, I get different results using different access methods for some of them (while getting identical results for others).
Here is the stripped-down reproducible code from the tutorial:
import torch
import pyro
import pyro.distributions as dist
from pyro.contrib.examples.bart import load_bart_od
from pyro.contrib.forecast import ForecastingModel, Forecaster
pyro.enable_validation(True)
pyro.clear_param_store()
pyro.__version__
# '1.3.1'
torch.__version__
# '1.5.0+cu101'
# import & prepare the data
dataset = load_bart_od()
T, O, D = dataset["counts"].shape
data = dataset["counts"][:T // (24 * 7) * 24 * 7].reshape(T // (24 * 7), -1).sum(-1).log()
data = data.unsqueeze(-1)
T0 = 0 # beginning
T2 = data.size(-2) # end
T1 = T2 - 52 # train/test split
# define the model class
class Model1(ForecastingModel):
    def model(self, zero_data, covariates):
        data_dim = zero_data.size(-1)
        feature_dim = covariates.size(-1)
        bias = pyro.sample("bias", dist.Normal(0, 10).expand([data_dim]).to_event(1))
        weight = pyro.sample("weight", dist.Normal(0, 0.1).expand([feature_dim]).to_event(1))
        prediction = bias + (weight * covariates).sum(-1, keepdim=True)
        assert prediction.shape[-2:] == zero_data.shape
        noise_scale = pyro.sample("noise_scale", dist.LogNormal(-5, 5).expand([1]).to_event(1))
        noise_dist = dist.Normal(0, noise_scale)
        self.predict(noise_dist, prediction)
# fit the model
pyro.set_rng_seed(1)
pyro.clear_param_store()
time = torch.arange(float(T2)) / 365
covariates = torch.stack([time], dim=-1)
forecaster = Forecaster(Model1(), data[:T1], covariates[:T1], learning_rate=0.1)
So far so good; now, I want to inspect the learned latent parameters stored in the ParamStore. There seems to be more than one way to do this; using the get_all_param_names() method:
for name in pyro.get_param_store().get_all_param_names():
    print(name, pyro.param(name).data.numpy())
I get
AutoNormal.locs.bias [14.585433]
AutoNormal.scales.bias [0.00631594]
AutoNormal.locs.weight [0.11947815]
AutoNormal.scales.weight [0.00922901]
AutoNormal.locs.noise_scale [-2.0719821]
AutoNormal.scales.noise_scale [0.03469057]
But using the named_parameters() method:
pyro.get_param_store().named_parameters()
gives the same values for the location (locs) parameters, but different values for all scales ones:
dict_items([
('AutoNormal.locs.bias', Parameter containing: tensor([14.5854], requires_grad=True)),
('AutoNormal.scales.bias', Parameter containing: tensor([-5.0647], requires_grad=True)),
('AutoNormal.locs.weight', Parameter containing: tensor([0.1195], requires_grad=True)),
('AutoNormal.scales.weight', Parameter containing: tensor([-4.6854], requires_grad=True)),
('AutoNormal.locs.noise_scale', Parameter containing: tensor([-2.0720], requires_grad=True)),
('AutoNormal.scales.noise_scale', Parameter containing: tensor([-3.3613], requires_grad=True))
])
How is this possible? According to the documentation, Paramstore is a simple key-value store; and there are only these six keys in it:
pyro.get_param_store().get_all_param_names() # .keys() method gives identical result
# result
dict_keys([
'AutoNormal.locs.bias',
'AutoNormal.scales.bias',
'AutoNormal.locs.weight',
'AutoNormal.scales.weight',
'AutoNormal.locs.noise_scale',
'AutoNormal.scales.noise_scale'])
so there is no way that one method accesses one set of items and the other a different one.
Am I missing something here?
pyro.param() returns transformed parameters, in this case transformed to the positive reals for the scales.
Here is the situation, as revealed in the Github thread I opened in parallel with this question...
The ParamStore is no longer just a simple key-value store; it also performs constraint transformations. Quoting a Pyro developer from the above link:
here's some historical background. The ParamStore was originally just a key-value store. Then we added support for constrained parameters; this introduced a new layer of separation between user-facing constrained values and internal unconstrained values. We created a new dict-like user-facing interface that exposed only constrained values, but to keep backwards compatibility with old code we kept the old interface around. The two interfaces are distinguished in the source files [...] but as you observe it looks like we forgot to mark the old interface as DEPRECATED.
I guess in clarifying docs we should:
clarify that the ParamStore is no longer a simple key-value store
but also performs constraint transforms;
mark all "old" style interface methods as DEPRECATED;
remove "old" style interface usage from examples and tutorials.
As a consequence, it turns out that, while pyro.param() returns the results in the constrained (user-facing) space, the older method named_parameters() returns the unconstrained (i.e. for internal use only) values, hence the apparent discrepancy.
Indeed, it is not difficult to verify that the scales values returned by the two methods above are related by a logarithmic transformation:
import numpy as np
items = list(pyro.get_param_store().named_parameters())  # unconstrained space
i = 0
for name in pyro.get_param_store().keys():
    if 'scales' in name:
        temp = np.log(
            pyro.param(name).item()  # constrained space
        )
        print(temp, items[i][1][0].item(), np.allclose(temp, items[i][1][0].item()))
    i += 1
# result:
-5.027793402915326 -5.0277934074401855 True
-4.600319371162187 -4.6003193855285645 True
-3.3920585732532835 -3.3920586109161377 True
Why does this discrepancy affect only the scales parameters? That's because scales (essentially standard deviations) are by definition constrained to be positive; that doesn't hold for locs (i.e. means), which are unconstrained, hence the two representations coincide for them.
As a result of the question above, a new bullet has now been added in the Paramstore documentation, giving a relevant hint:
in general parameters are associated with both constrained and unconstrained values. for example, under the hood a parameter that is constrained to be positive is represented as an unconstrained tensor in log space.
as well as in the documentation of the named_parameters() method of the old interface:
Note that, in the event the parameter is constrained, unconstrained_value is in the unconstrained space implicitly used by the constraint.
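In practice, then, if you want the user-facing (constrained) values, stick to the new dict-like interface; a small sketch:

store = pyro.get_param_store()
for name in store.keys():
    # store[name] and pyro.param(name) both return the constrained value
    print(name, store[name].detach().numpy())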

How to create LinearQuadraticRegulator for Acrobot system using pydrake

I am trying to create LQR for acrobot system from scratch:
file_name = "acrobot.sdf" # from drake/multibody/benchmarks/acrobot/acrobot.sdf
acrobot = MultibodyPlant()
parser = Parser(plant=acrobot)
parser.AddModelFromFile(file_name)
acrobot.AddForceElement(UniformGravityFieldElement([0, 0, -9.81]))
acrobot.Finalize()
acrobot_context = acrobot.CreateDefaultContext()
shoulder = acrobot.GetJointByName("ShoulderJoint")
elbow = acrobot.GetJointByName("ElbowJoint")
shoulder.set_angle(context=acrobot_context, angle=0.0)
elbow.set_angle(context=acrobot_context, angle=0.0)
Q = np.identity(4)
R = np.identity(1)
N = np.zeros([4, 4])
controller = LinearQuadraticRegulator(acrobot, acrobot_context, Q, R)
Running this script, I receive an error at the last line:
RuntimeError: Vector-valued input port acrobot_actuation must be either fixed or connected to the output of another system.
None of my attempts to fix or connect the input ports were successful.
P.S. I know that there exists AcrobotPlant, but the idea is to create LQR from sdf on the run.
P.P.S. Why acrobot.get_num_input_ports() return 5 instead of 1?
Here are the deltas that I had to apply to have it at least pass that error:
https://github.com/EricCousineau-TRI/drake/commit/e7167fb8a
Main notes:
You have to either (a) use plant_context.FixInputPort on the relevant ports, or (b) use DiagramBuilder to compose systems via AddSystem + Connect(output_port, input_port).
I'd recommend naming the MBP instance plant, so that way you can refer to model instances directly.
Does this help some?
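For reference, a minimal sketch of option (a) that is enough to get past that particular error, reusing the names from the question (newer pydrake spells the same thing as acrobot.get_actuation_input_port().FixValue(acrobot_context, np.zeros(1))):

# Fix the actuation input to zero at the operating point so the plant's
# dynamics can be evaluated for linearization.
acrobot_context.FixInputPort(
    acrobot.get_actuation_input_port().get_index(), np.zeros(1))
controller = LinearQuadraticRegulator(acrobot, acrobot_context, Q, R)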
P.P.S. Why acrobot.get_num_input_ports() return 5 instead of 1?
It's because it's a MultibodyPlant instance, which has several more ports. Preview from plot_system_graphviz:

F# / Simplest way to validate array length at COMPILE time

I have a scientific project. There are vectors / square matrices of various lengths there. Obviously (for example) a vector of length 2 cannot be added to a vector of length 3 (and so on and so forth). There are several .NET libraries which deal with vectors / matrices. All of them either have generic vectors / matrices or have some very specific vectors / matrices which do not suit my needs.
Most, if not all, of these libraries can create a vector from a list or array. Unfortunately, if I mistakenly give an input array of the wrong length, then I will get a vector of the wrong length and everything will blow up at run time!
I wonder if it is possible to check array length at compile time so that to get a compile error if, let’s say, I try to pass a 5-element array to a vector of length 2 “constructor”. After all, printfn does almost that!
F# type providers come to mind, but I am not sure how to apply them here.
Thanks a lot!
Thanks to the OP for an interesting question. My answer frequency has dropped, not because of unwillingness to help, but rather because there are few questions that tickle my interest.
We don't have dependent types in F#, and F# doesn't support generics with numeric type arguments (like C++ does).
However we could create distinct types for different dimensions like Dim1, Dim2 and so on and provide them as type arguments.
This would allow us to have a type signature for apply, which applies a matrix to a vector, like this:
let apply (m : Matrix<'R, 'C>) (v : Vector<'C>) : Vector<'R> = …
The code won't compile unless the number of columns of the matrix matches the length of the vector. In addition, the resulting vector's length equals the number of rows of the matrix.
One way to do this is to define an interface IDimension and some concrete implementations representing the different dimensions.
type IDimension =
    interface
        abstract Size : int
    end

type Dim1 () = class interface IDimension with member x.Size = 1 end end
type Dim2 () = class interface IDimension with member x.Size = 2 end end
The vector and the matrix can then be implemented like this
type Vector<'Dim when 'Dim :> IDimension
                 and 'Dim : (new : unit -> 'Dim)
           > () =
    class
        let dim = new 'Dim()
        let vs  = Array.zeroCreate<float> dim.Size
        member x.Dim    = dim
        member x.Values = vs
    end

type Matrix<'RowDim, 'ColumnDim when 'RowDim :> IDimension
                                 and 'RowDim : (new : unit -> 'RowDim)
                                 and 'ColumnDim :> IDimension
                                 and 'ColumnDim : (new : unit -> 'ColumnDim)
           > () =
    class
        let rowDim    = new 'RowDim()
        let columnDim = new 'ColumnDim()
        let vs        = Array.zeroCreate<float> (rowDim.Size * columnDim.Size)
        member x.RowDim    = rowDim
        member x.ColumnDim = columnDim
        member x.Values    = vs
    end
Finally this allows us to write code like this:
let m76 = Matrix<Dim7, Dim6> ()
let v6 = Vector<Dim6> ()
let v7 = apply m76 v6 // Vector<Dim7>
// Doesn't compile because v7 has the wrong dimension
let vv = apply m76 v7
If you need a wide range of dimensions (because you have an algebra that increments/decrements the dimensions of vectors/matrices), you could support that using some smart variant of Church numerals.
Whether this is usable or not is entirely up to the reader, I think.
PS.
Perhaps units of measure could have been used for this as well, if they applied to more types than floats.
The general term for what you're looking for is dependent types, but F# does not support them.
I've seen an experiment in using type providers to mimic one particular flavor of dependent types (constraining the domain of a primitive type), but I wouldn't expect it to be possible to achieve what you want using type providers in their current form. They seem to be too whimsical for that.
Print format strings appear to be doing that (and in fact printf is a "Hello World" application for dependent types), but actually they work because they get special treatment from the compiler, and that mechanism is not extensible.
You're doomed to ensure correct lengths at runtime.
My best bet would be to use structs to encode actual vectors and ensure correctness on the API level that way, map them to arrays at the point where you're interacting with those matrix algebra libraries, then map the results back to structs with ample assertions when done.
The comment from @Justanothermetaprogrammer qualifies as an answer. Here is how it works in a real example. The matrix implementation in the example is based on MathNet.Numerics.LinearAlgebra:
open MathNet.Numerics.LinearAlgebra

type RealMatrix2x2 =
    | RealMatrix2x2 of Matrix<double>

    static member private createInternal (a : #seq<#seq<double>>) =
        matrix a |> RealMatrix2x2

    static member create
        (
            (a11, a12),
            (a21, a22)
        ) =
        RealMatrix2x2.createInternal
            [|
                [| a11; a12 |]
                [| a21; a22 |]
            |]
let m2 =
    (
        (1., 2.),
        (3., 4.)
    )
    |> RealMatrix2x2.create
The tuple signatures and "re-mapping" into #seq<#seq<double>> can be easily code-generated using, for example, Excel or any other convenient tool for as many dimensions as necessary. In fact, the whole class along with any other necessary operator overrides (like multiplication of RealMatrix2x2 by RealMatrix2x2, ...) can be code generated for all necessary dimensions.
