PyDrake CollisionFilterManager Not Applying Filter - drake

I have a system that consists of a robotic manipulator and an object. I want to evaluate signed distances between all collision geometries in the system while excluding collisions between the fingertips of the end-effector and the convex geometries that make up the collision geometry of the object. However, when I use the CollisionFilterManager to try to apply the relevant exclusions, my code still computes signed distances between fingertips and object geometries when calling ComputeSignedDistancePairClosestPoints() downstream.
I have a container class that has the plant and its SceneGraph as attributes. When initializing this class, I try to filter the collisions. Here are the relevant parts of the code:
class SimplifiedClass:
    def __init__(self, ...):
        # initializing plant, contexts, and query object port
        self.diagram = function_generating_diagram_with_plant(...)
        self.plant = self.diagram.GetSubsystemByName("plant")
        self.scene_graph = self.diagram.GetSubsystemByName("scene_graph")
        _diag_context = self.diagram.CreateDefaultContext()
        self.plant_context = self.plant.GetMyMutableContextFromRoot(_diag_context)
        self.sg_context = self.scene_graph.GetMyMutableContextFromRoot(_diag_context)
        self.qo_port = self.scene_graph.get_query_output_port()

        # applying filters
        cfm = self.scene_graph.collision_filter_manager()
        inspector = self.query_object.inspector()
        fingertip_geoms = []
        obj_collision_geoms = []
        gids = inspector.GetAllGeometryIds()
        for g in gids:
            name = inspector.GetName(g)
            # if name.endswith("ds_collision") or name.endswith("collision_1"):
            if name.endswith("tip_collision_1") and "algr" in name:
                fingertip_geoms.append(g)
            elif name.startswith("obj_collision"):
                obj_collision_geoms.append(g)
        ftip_set = GeometrySet(fingertip_geoms)
        obj_set = GeometrySet(obj_collision_geoms)
        cfm.Apply(
            CollisionFilterDeclaration()
            .ExcludeBetween(ftip_set, obj_set)
            .ExcludeWithin(obj_set)
        )

    @property
    def query_object(self):
        return self.qo_port.Eval(self.sg_context)

    def function_that_computes_signed_distances(self, ...):
        # this function calls ComputeSignedDistancePairClosestPoints(), which
        # computes signed distances even between filtered geometry pairs
        ...
I've confirmed that the correct geometry IDs are being retrieved during initialization (at least the names are correct). But in a simple test where I count how many SignedDistancePairs are returned from ComputeSignedDistancePairClosestPoints() before and after the attempt to apply the filter, the same number of pairs is returned, which implies the filter had no effect even immediately after declaring it. I've also confirmed, by examining the names associated with the signed distance pairs during my downstream function call, that the geometries that should be filtered are not.
Is there an obvious bug in my code? If not, where else could the bug be located besides here?

This is the age-old problem: model vs. context.
In short, SceneGraph stores an internal model so you can construct as you go. When you create a context, a copy of that model is placed in the context. That copy is independent: if you continue to modify SceneGraph's model, you'll only observe the change in contexts you allocate afterwards.
In your code above, you've already allocated a context. You acquire a collision filter manager using cfm = self.scene_graph.collision_filter_manager(). That is the manager for SceneGraph's model. You want the other overload, which gets the manager from the context: self.scene_graph.collision_filter_manager(self.sg_context), as documented here.
Alternatively, you can modify the collision filters before you allocate a context. Or throw out the old context and reallocate. All are viable choices.
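For concreteness, here is a minimal sketch of the context-based fix, reusing the attribute names and GeometrySets from the question:

# Apply the declaration to the already-allocated context, not to
# SceneGraph's internal model, so the existing query object sees it.
cfm = self.scene_graph.collision_filter_manager(self.sg_context)
cfm.Apply(
    CollisionFilterDeclaration()
    .ExcludeBetween(ftip_set, obj_set)
    .ExcludeWithin(obj_set)
)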

Related

Can you generate a scene graph after a plant has been finalized?

I'm working on a project that requires me to add a model through the Parser (which requires the plant to be of the same scalar type as the arrays used) before setting the position of the model in said plant and taking distance queries. These queries only work when the query object generated from the scene graph is of type float.
I've run into a problem where setting the position doesn't work because the array being used is of type AutoDiff. A possible solution would be converting the float plant to AutoDiff with plant.ToAutoDiffXd(), but this only creates a copy of the plant without coupling it to the scene graph (and in turn the query object) from which the queries are derived. Taking queries with a query object generated from the original plant would then fail to reflect the new position passed to the AutoDiff copy.
Is there a way to create a new scene graph from the already-finalized AutoDiff copy of the original plant, so that I can perform the queries with it?
A couple of thoughts:
Don't just convert the plant to autodiff. Convert the whole diagram. That will give you a converted, connected network.
You're stuck with the current workflow. Presumably, your proximity geometries are specified in your parsed file (as <collision> tags). The parsing process is ephemeral. The declaration is consumed, passed through MultibodyPlant into SceneGraph. If there is no SceneGraph at parse time, all knowledge of the declared collision geometry is forgotten.
So, the typical workflow is:
Create a float-valued diagram.
Scalar convert it to an AutoDiff-valued diagram.
Keep both around to serve the different roles.
We don't have a tutorial that directly shows scalar converting an entire diagram, but it's akin to what is shown in this MultibodyPlant-specific tutorial. Just call ToScalarType() on the Diagram root.
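In pydrake, a minimal sketch of that workflow might look like the following (ToAutoDiffXd() is the Python spelling of the scalar conversion; the model file name is a placeholder):

from pydrake.multibody.parsing import Parser
from pydrake.multibody.plant import AddMultibodyPlantSceneGraph
from pydrake.systems.framework import DiagramBuilder

# 1. Create a float-valued diagram.
builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=0.0)
Parser(plant).AddModels("model.sdf")  # placeholder model file
plant.Finalize()
diagram = builder.Build()

# 2. Scalar-convert the whole diagram; the converted plant and scene
#    graph remain connected inside the converted diagram.
diagram_ad = diagram.ToAutoDiffXd()
plant_ad = diagram_ad.GetSubsystemByName(plant.get_name())

# 3. Keep both diagrams around to serve their different roles.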

Can I assume the running order of a LeafSystem's CalcOutput functions?

I am working on a LeafSystem like this:
class exampleLeafSystem(LeafSystem):
    def __init__(self, plant):
        LeafSystem.__init__(self)
        self._plant = plant
        self._plant_context = plant.CreateDefaultContext()
        self.DeclareVectorInputPort("q_v", BasicVector(6))
        self.DeclareVectorOutputPort("tau", BasicVector(3), self.TauCalcOutput)
        self.DeclareVectorOutputPort("xe", BasicVector(3), self.xeCalcOutput)

    def TauCalcOutput(self, context, output):
        q_v = self.get_input_port(0).Eval(context)
        self._plant.SetPositionsAndVelocities(self._plant_context, q_v)
        # Do some calculation with self._plant and self._plant_context to get the output
        output.SetFromVector(tau)

    def xeCalcOutput(self, context, output):
        q_v = self.get_input_port(0).Eval(context)
        self._plant.SetPositionsAndVelocities(self._plant_context, q_v)
        # Do some calculation with self._plant and self._plant_context to get the output
        output.SetFromVector(xe)
In the two methods TauCalcOutput and xeCalcOutput here, I first need to update the MultibodyPlant's state information and then do the calculation to compute the output. However, since I do not know the order in which the two CalcOutput methods are called, for both methods to use the newest state information I have to write
q_v = self.get_input_port(0).Eval(context)
self._plant.SetPositionsAndVelocities(self._plant_context, q_v)
in both methods, which seems a bit unnecessary. If I could assume the order in which the CalcOutput methods are run, for example that TauCalcOutput always runs first, then I could have only
q_v = self.get_input_port(0).Eval(context)
self._plant.SetPositionsAndVelocities(self._plant_context, q_v)
in the TauCalcOutput method, without worrying that the xeCalcOutput method will use MultibodyPlant state information that is lagged by one step.
So my question is: is there a specific order in which the CalcOutput methods are called?
No. The contract is that output ports can be called in any order at any time, and are only called when they are evaluated (e.g. by a downstream system). The order in which they are called depends on the other systems in the Diagram; they might not all get called during a single simulation step (e.g. if one system is consumed by a discrete-time system with time step 0.1 and another by one with time step 0.2), or they might not be called at all if they are not connected. Users can even call get_output_port().Eval() manually.
For a general approach to avoiding duplicate computation in output ports, you should store the result of the shared computation in the Context, either as state or as a "cache entry". See DeclareCacheEntry for more details.
For this workflow, specifically, perhaps the simplest solution is to check whether the positions and velocities are already set to the same value, just to avoid repeating the kinematics evaluation, for instance as you see here:
https://github.com/RobotLocomotion/drake/blob/6e6e37ffa677362245773f13c0628f0042b47414/multibody/inverse_kinematics/kinematic_constraint_utilities.cc#L47-L54
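In Python, that check might look like the following (a minimal sketch based on the class from the question; the helper name and the exact-equality test are my choices, mirroring the linked C++ utility):

import numpy as np

def _update_plant_state(self, context):
    # Shared helper called at the top of both output calculators; skips
    # the kinematics update when self._plant_context already holds
    # exactly these positions and velocities.
    q_v = self.get_input_port(0).Eval(context)
    current = self._plant.GetPositionsAndVelocities(self._plant_context)
    if not np.array_equal(q_v, current):
        self._plant.SetPositionsAndVelocities(self._plant_context, q_v)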

How do you set up input ports in custom plants in Drake?

I am currently working to implement a nonlinear system in Drake, so I've been setting up a LeafSystem as my plant. Following this tutorial, I was able to get a passive version of my system functioning properly (no variable control input). But the tutorial doesn't cover defining inputs in these custom systems, and neither do any examples I was able to find. Could someone help me with the few questions listed below?
As shown in the code below, I simply guessed at the DeclareVectorInputPort function, but I feel like there should be a third callback similar to CopyStateOut. Should I add a callback that just sets the input port to the scalar phi (my desired voltage input)?
When I call CalcTimeDerivatives, how should I access phi? Should it be passed as another argument or should the aforementioned callback be used to set phi as some callable member of DEAContinuousSys?
At the end of CalcTimeDerivatives, I also feel like there should be a more efficient way to set all derivatives (both velocity and acceleration) to a single vector rather than individually assigning each index of derivatives, but I wasn't able to figure it out. Tips?
from pydrake.systems.framework import BasicVector, LeafSystem

class DEAContinuousSys(LeafSystem):
    def __init__(self):
        LeafSystem.__init__(self)
        self.DeclareContinuousState(2)  # two state variables: lam, lam_d
        self.DeclareVectorOutputPort('Lam', BasicVector(2), self.CopyStateOut)  # two outputs: lam_d, lam_dd
        self.DeclareVectorInputPort('phi', BasicVector(1))  # help: guessed at this one, do I need a third argument?
        # TODO: add input instead of constant phi

    def DoCalcTimeDerivatives(self, context, derivatives):
        Lam = context.get_continuous_state_vector()  # get state
        Lam = Lam.get_value()  # cast as regular array
        Lam_d = DEA.dynamics(Lam, None, phi)  # derive acceleration (no timestep required) TODO: CONNECT VOLTAGE INPUT
        derivatives.get_mutable_vector().SetAtIndex(0, Lam_d[0])  # set velocity
        derivatives.get_mutable_vector().SetAtIndex(1, Lam_d[1])  # set acceleration

    def CopyStateOut(self, context, output):
        Lam = context.get_continuous_state_vector().CopyToVector()
        output.SetFromVector(Lam)

continuous_sys = DEAContinuousSys()
You're off to a good start! Normally input ports are wired into some other system's output port so there is no need for them to have their own "Calc" method. However, you can also set them to fixed values, using port.FixValue(context, value) (the port object is returned by DeclareVectorInputPort()). Here is a link to the C++ documentation. I'm not certain of the Python equivalent but it should be very similar.
Hopefully someone else can point you to a Python example if the C++ documentation is insufficient.
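A rough Python sketch of the pieces, assuming the DEAContinuousSys class from the question (the port indices follow the declarations there):

# Inside DoCalcTimeDerivatives: read phi from the input port instead of
# a constant (answers question 2).
phi = self.get_input_port(0).Eval(context)[0]

# Set both derivative entries from one vector (answers question 3).
derivatives.get_mutable_vector().SetFromVector(Lam_d)

# Outside the system: either wire the port to another system's output,
# or pin it to a fixed value in a given context (no third callback needed).
context = continuous_sys.CreateDefaultContext()
continuous_sys.get_input_port(0).FixValue(context, [1.0])  # e.g. 1 volt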
You might take a look at the quadrotor2d example from the underactuated notes:
https://github.com/RussTedrake/underactuated/blob/master/underactuated/quadrotor2d.py#L44

TFF: How do you define the tff.simulation.ClientData.from_clients_and_fn function?

In the federated learning context, one classmethod that should work is tff.simulation.ClientData.from_clients_and_fn: if I pass a list of client_ids and a function which returns the appropriate dataset when given a client id, I will have a fully functional ClientData.
I think an approach for defining that function is to construct a Python dict which maps client IDs to tf.data.Dataset objects; the function would then take a client id, look up the dataset in the dict, and return it.
So I defined the function as below, but I think it is wrong. What do you think?
client_ids = ["0", "1", "2"]
tab = {"0": ds, "1": ds, "2": ds}

def create_tf_dataset_for_client_fn(id):
    return ds

source = tff.simulation.ClientData.from_clients_and_fn(client_ids, create_tf_dataset_for_client_fn)
I suppose here that the 3 clients all have the same dataset, ds.
Creating a dict of (client_id, dataset) key-value pairs is a reasonable way to set up a tff.simulation.ClientData. Indeed, the code in the question will result in all clients having the same dataset, since ds is returned for all values of the parameter id. One thing to watch out for when pre-constructing a dict of datasets is that it may require loading the entire contents of the data into memory (which may fail for large datasets).
Alternatively, constructing the dataset on-demand could reduce memory usage. One example might be to have a dict of (client_id, file path) key-value pairs. Something like:
import tensorflow as tf
import tensorflow_federated as tff

dataset_paths = {
    'client_0': '/tmp/A.txt',
    'client_1': '/tmp/B.txt',
    'client_2': '/tmp/C.txt',
}

def create_tf_dataset_for_client_fn(id):
    path = dataset_paths.get(id)
    if path is None:
        raise ValueError(f'No dataset for client {id}')
    return tf.data.TextLineDataset(path)

source = tff.simulation.ClientData.from_clients_and_fn(
    list(dataset_paths.keys()), create_tf_dataset_for_client_fn)
This is similar to the approach used in tff.simulation.FilePerUserClientData. It may be useful to look at the code of that class as an example.

How does one dynamically add new parameters to optimizers in Pytorch?

I was going through this post in the pytorch forum, and I also wanted to do this. The original post removes and adds layers, but I think my situation is not that different. I also want to add layers or more filters or word embeddings. My main motivation is that the AI agent does not know the whole vocabulary/dictionary in advance because it's large. I strongly prefer (for the moment) not to do character-by-character RNNs.
So what will happen for me is when the agent starts a forward pass it might find new words it has never seen and will need to add them to the embedding table (or perhaps add new filters before it starts the forward pass).
So what I want to make sure is:
embeddings are added correctly (at the right time, when a new computation graph is made) so that they are updatable by the optimizer
no issues with stored info of past parameters, e.g. if it's using some sort of momentum
How does one do this? Any sample code that works?
Just to add an answer to the title of your question: "How does one dynamically add new parameters to optimizers in Pytorch?"
You can append params at any time to the optimizer:
import torch
import torch.optim as optim

model = torch.nn.Linear(2, 2)

# Initialize optimizer (note: Adam takes betas, not a momentum argument)
optimizer = optim.Adam(model.parameters(), lr=0.001)

# New parameters must have requires_grad=True to be updated
extra_params = torch.randn(2, 2, requires_grad=True)
# add_param_group appends a new (validated) group to optimizer.param_groups
optimizer.add_param_group({'params': [extra_params]})

# then you can print your `extra_params`
print("extra params", extra_params)
print("optimizer params", optimizer.param_groups)
That is a tricky question, as I would argue that the answer is "depends", in particular on how you want to deal with the optimizer.
Let's start with your specific problem: an embedding. In particular, you are asking how to add embeddings to allow for a larger vocabulary dynamically. My first advice is that, if you have a good sense of an upper bound on your vocabulary size, make the embedding large enough to cope with it from the beginning, as this is more efficient and you will need the memory eventually anyway. But this is not what you asked. So, to dynamically change your embedding, you'll need to overwrite your old one with a new one and inform your optimizer of the change. You can do that whenever you run into an exception with your old embedding, in a try ... except block. This should roughly follow this idea:
# from within whichever module owns the embedding
# remember the already trained weights
old_embedding_weights = self.embedding.weight.data
old_vocab_size = old_embedding_weights.shape[0]
# create a new embedding of the new size
self.embedding = nn.Embedding(new_vocab_size, embedding_dim)
# initialize the values for the new embedding. this is random, but you might want to use something like GloVe
new_weights = torch.randn(new_vocab_size, embedding_dim)
# as your old values may have been updated, you want to keep these updated values
new_weights[:old_vocab_size] = old_embedding_weights
self.embedding.weight.data.copy_(new_weights)
However, you should not do this for every single new word you receive, as this copying takes time (and a whole lot of memory, as the embedding briefly exists twice; if you're nearly out of memory, just make your embedding large enough from the start). So instead, increase the size dynamically by a couple of hundred slots at a time, e.g. as in the sketch below.
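A minimal sketch of that chunked growth, assuming a plain nn.Embedding (GROWTH_CHUNK is a made-up tuning constant):

import torch
import torch.nn as nn

GROWTH_CHUNK = 256  # hypothetical; tune to how fast your vocabulary grows

def grow_embedding(old_embedding: nn.Embedding, needed_size: int) -> nn.Embedding:
    # Reallocate only when the current table is too small, then
    # over-allocate so we don't copy on every single new word.
    if needed_size <= old_embedding.num_embeddings:
        return old_embedding
    new_embedding = nn.Embedding(needed_size + GROWTH_CHUNK,
                                 old_embedding.embedding_dim)
    with torch.no_grad():
        # keep the already-trained rows
        new_embedding.weight[:old_embedding.num_embeddings] = old_embedding.weight
    return new_embedding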
Additionally, this first step already raises some questions:
How does my respective nn.Module know about the new embedding parameter?
The __setattr__ method of nn.Module takes care of that (see here)
Second, why don't I simply change my parameter? That already points towards some of the problems of changing the optimizer: pytorch internally keeps references by object ID. This means that if you change your object, all these references will point towards a potentially incompatible object, as its properties have changed. So we should simply create a new parameter instead.
What about other nn.Parameters or nn.Modules that are not embeddings? These you treat the same way. You basically just instantiate them and attach them to their parent module. The __setattr__ method will take care of the rest. So you can do so completely dynamically ...
Except, of course, the optimizer. The optimizer is the only other thing that "knows" about your parameters except for your main model-module. So you need to let the optimizer know of any change.
And this is tricky, if you want to be sophisticated about it, and very easy if you don't care about keeping the optimizer state. However, even if you want to be sophisticated about it, there is a very good reason why you probably should not do this anyways. More about that below.
Anyways, if you don't care, a simple
# simply overwrite your old optimizer
optimizer = optim.SGD(model.parameters(), lr=0.001)
will do. If you do care, however, and want to transfer your old state, you can do so the same way that you store and later load parameters and optimizer states from disk: using the .state_dict() and .load_state_dict() methods. This, however, only works with a twist:
# extract the state dict from your old optimizer
old_state_dict = optimizer.state_dict()
# create a new optimizer
optimizer = optim.SGD(model.parameters(), lr=0.001)
new_state_dict = optimizer.state_dict()
# the old state dict will have references to the old parameters, in
# state_dict['param_groups'][xyz]['params'] and in state_dict['state']
# you now need to find the parameter mismatches between the old and new state dicts
# if your optimizer has multiple param groups, you need to loop over them, too
# (xyz is a placeholder here; mostly you'll only have one group anyway, so just replace xyz with 0)
new_pars = [p for p in new_state_dict['param_groups'][xyz]['params']
            if p not in old_state_dict['param_groups'][xyz]['params']]
old_pars = [p for p in old_state_dict['param_groups'][xyz]['params']
            if p not in new_state_dict['param_groups'][xyz]['params']]
# then you remove all the outdated ones from the state dict
for pid in old_pars:
    old_state_dict['state'].pop(pid)
# and add a new state for each new parameter to the state:
for pid in new_pars:
    old_state_dict['param_groups'][xyz]['params'].append(pid)
    old_state_dict['state'][pid] = { ... }  # your new state def here, depending on your optimizer
However, here's the reason why you should probably never update your optimizer like this, but should instead re-initialize from scratch and just accept the loss of state information: when you change your computation graph, you change the forward and backward computation for all parameters along your computation path (if you do not have a branching architecture, this path will be your entire graph). More specifically, the input to your functions (= layer/nn.Module) will be different if you change some function applied earlier, and the gradients will change if you change some function applied later. That in turn invalidates the entire state of your optimizer. So if you keep your optimizer's state around, it will be a state computed for a different computation graph, and will probably end up in catastrophic behavior on the part of your optimizer if you try to apply it to a new computation graph. (I've been there ...)
So, to sum it up: I'd really recommend keeping it simple, changing a parameter only as conservatively as possible, and not touching the optimizer.
If you want to customize the initial params:
from itertools import chain
import torch.nn as nn
import torch.optim as optim

l1 = nn.Linear(3, 3)
l2 = nn.Linear(2, 3)
optimizer = optim.SGD(chain(l1.parameters(), l2.parameters()), lr=0.01, momentum=0.9)
The key is that the first parameter of the constructor accepts any iterable of parameters.
