I'm creating a system of radially expanding bodies connected by PrismaticJoints, and I'm finding that, although I initialized each joint with position limits, the joints easily pass those limits under external forces such as gravity. Here is a plot of some joints' translations over time, showing how they pass the lower and upper limits at 3.5 and 4.2:
What am I missing? My call to create the joints looks like this:
const multibody::Joint<double>& joint =
    plant_->AddJoint<drake::multibody::PrismaticJoint>(
        sphere_name + "_joint",
        center_body, std::nullopt,
        connect_body, std::nullopt,
        unitVlist()[j], r_low, r_upp, 0);
where
*_body are bodies,
unitVlist() returns a list of unit vectors to pull from,
r_low and r_upp are doubles corresponding to the lower and upper limits.
Currently, joint limits in Drake are enforced only by the discrete solver, that is, what you get if you supply a time step in the MultibodyPlant constructor. Our continuous integrators don't see the limits yet. We are aware of that, but I couldn't actually find a GitHub issue complaining about it -- would you mind filing one? You can do it here (select "New Issue").
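In case it is useful, here is a minimal pydrake sketch of that distinction (the C++ MultibodyPlant constructor takes the same time_step argument):

from pydrake.multibody.plant import MultibodyPlant

# time_step > 0 selects the discrete solver, which enforces joint limits;
# time_step = 0 selects continuous integration, where limits are currently ignored.
discrete_plant = MultibodyPlant(time_step=1e-3)    # limits enforced
continuous_plant = MultibodyPlant(time_step=0.0)   # limits not enforced (yet)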
I'm trying to use Drake's inverse dynamics controller on an arm with a floating base, and based on this discussion it seems like the most straightforward approach is to use two separate plants, since the controller only supports fully actuated systems.
Following Python bindings error when adding two plants to a scene graph in pyDrake, I attempted to create two plants using the following code:
def register_plant_with_scene_graph(scene_graph, plant):
    plant.RegsterAsSourceForSceneGraph(scene_graph)
    builder.Connect(
        plant.get_geometry_poses_output_port(),
        scene_graph.get_source_pose_port(plant.get_source_id()),
    )
    builder.Connect(
        scene_graph.get_query_output_port(),
        plant.get_geometry_query_input_port(),
    )

builder = DiagramBuilder()
scene_graph = builder.AddSystem(SceneGraph())
plant_1 = builder.AddSystem(MultibodyPlant(time_step=0.0))
register_plant_with_scene_graph(scene_graph, plant_1)
plant_2 = builder.AddSystem(MultibodyPlant(time_step=0.0))
register_plant_with_scene_graph(scene_graph, plant_2)
which produced the error
AttributeError: 'MultibodyPlant_[float]' object has no attribute 'RegsterAsSourceForSceneGraph'
This seems odd because, according to the documentation, the function should exist.
Is this function available in the python bindings for drake? Also, more broadly, is this the correct way to approach using the inverse dynamics controller on a free-floating manipulator?
Inverse dynamics takes desired positions, velocities, and accelerations and computes the required torques. If your robot has a floating base, then you cannot accept arbitrary acceleration commands. For instance, the total center of mass of your robot will be falling under gravity; any commanded acceleration that does not satisfy this constraint will have no feasible inverse dynamics solution. I think there must be something more that we need to understand about your problem formulation.
Often when people ask this question, they are thinking of a robot that is relying on contact forces in addition to generalized force/torques in order to achieve the requested accelerations. In that case, the problem needs to include those contact forces as decision variables, too. Since contact forces have unilateral constraints (e.g. feet cannot pull on the ground), and friction cone constraints, this inverse dynamics problem is almost always formulated as a quadratic program. For instance, as in this paper. We don't currently provide that QP formulation in Drake, but it is not hard to write it against the MathematicalProgram interface. And we do have some older code that was removed from Drake (since it wasn't actively developed) that we can point you to if it helps.
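To make the shape of that QP concrete, here is a small, illustrative MathematicalProgram sketch. The dynamics terms (M, Cg, B, Jc) and the desired accelerations are toy placeholder numbers, not quantities computed from a real plant, and the friction cone is linearized to two inequalities:

import numpy as np
from pydrake.solvers import MathematicalProgram, Solve

# Toy 3-DoF system with one planar contact force f = [f_normal, f_tangent].
nv, nu, nf = 3, 2, 2
M = np.diag([2.0, 1.5, 1.0])            # placeholder mass matrix M(q)
Cg = np.array([0.0, 0.0, 9.81])         # placeholder bias + gravity terms
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])              # actuation matrix; last DoF is unactuated
Jc = np.array([[0.0, 0.0, 1.0],
               [1.0, 0.0, 0.0]])        # placeholder contact Jacobian
vdot_des = np.array([0.1, -0.2, 0.0])   # desired generalized accelerations
mu = 0.8                                # friction coefficient

prog = MathematicalProgram()
vdot = prog.NewContinuousVariables(nv, "vdot")
tau = prog.NewContinuousVariables(nu, "tau")
f = prog.NewContinuousVariables(nf, "f")

# Dynamics as an equality constraint: M*vdot + Cg = B*tau + Jc^T*f.
Aeq = np.hstack([M, -B, -Jc.T])
prog.AddLinearEqualityConstraint(Aeq, -Cg, np.concatenate([vdot, tau, f]))

# Unilateral contact and linearized friction cone: f_n >= 0, |f_t| <= mu*f_n.
prog.AddLinearConstraint(f[0] >= 0)
prog.AddLinearConstraint(f[1] <= mu * f[0])
prog.AddLinearConstraint(f[1] >= -mu * f[0])

# Track the desired accelerations as closely as possible.
prog.AddQuadraticErrorCost(np.eye(nv), vdot_des, vdot)

result = Solve(prog)
print(result.GetSolution(tau), result.GetSolution(f))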
I am working with a diagram which includes a MultibodyPlant with a Propeller connected to it. The Propeller actually realizes numerous physical propellers, which are distributed among the bodies of the MultibodyPlant.
I am able to simulate the dynamics of the combined system by setting the prop forces with FixValue, so I'm on the right track.
What I'd like to be able to do is, given a configuration for the system (i.e. the MultiBodyPlant context) and a chosen propeller command, compute the generalized forces acting on the system. My sense is that this is not immediately available since the simulation is actually using the RNEA, and so does not aggregate the forces all together in that way. For what I'm doing (and even just as a sanity check), I would like to compute the forces directly, not just their effect on the evolution of the state.
Is there an existing method to compute this built into Drake, or should I compute it manually using the spatial jacobian of each propeller frame and the applied SpatialForce of the corresponding propeller? (Something along the lines of this question: How to get the matrix that maps external forces to generalized forces?)
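For concreteness, here is a rough pydrake sketch of the manual computation I have in mind; the body name, the applied spatial force, and the frame choices are just placeholders:

import numpy as np
from pydrake.multibody.tree import JacobianWrtVariable

# Map one propeller's applied spatial force to generalized forces via tau = J^T * F.
# `plant` and `plant_context` are assumed to exist; F_Bp_W is the 6-vector
# [torque; force] applied at point Bp of body B, expressed in the world frame W.
def propeller_generalized_forces(plant, plant_context, body_name, F_Bp_W):
    body = plant.GetBodyByName(body_name)
    J_WBp = plant.CalcJacobianSpatialVelocity(
        plant_context,
        JacobianWrtVariable.kV,
        body.body_frame(),
        np.zeros(3),               # Bp expressed in B (here, the body origin)
        plant.world_frame(),       # measured in the world frame
        plant.world_frame())       # expressed in the world frame
    return J_WBp.T @ F_Bp_W        # shape (nv,): generalized forces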
Many thanks for your help.
I understand better now. It's a very reasonable request! You have two systems at play: the Propeller and MultibodyPlant. Unfortunately, the quantity you want is all of Propeller and just a piece of MultibodyPlant. We don't offer direct access to the B(q) matrix in that case.
What you can do is scalar-convert the Diagram to either AutoDiffXd or symbolic::Expression and call CalcImplicitTimeDerivativesResidual on it to get the entire dynamics in implicit form (avoiding the M(q) inverse). You could call it twice -- once with the Propeller inputs set up via FixValue as AutoDiffXd and/or symbolic::Variable values, and again with them set to zero -- and then subtract the two results.
Note: CalcImplicitTimeDerivativesResidual is relatively new; I haven't pushed the python bindings for it yet (but it's been on my list). Do you need it from python?
I think that perhaps you are looking for the MultibodyPlant reaction_forces output port?
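If that port is the right fit, a tiny pydrake sketch of reading it might look like this (assuming `plant` and its `plant_context` already exist):

# Evaluates the reaction_forces output port; the result is one SpatialForce per
# joint, indexed by JointIndex.
reaction_forces = plant.get_reaction_forces_output_port().Eval(plant_context)
for i, F in enumerate(reaction_forces):
    print(i, F.rotational(), F.translational())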
Currently, I am trying to visualize multiple robots in Drake. This means that dynamics will not be taken into account and that the MBP will not be added as a system; the MBP is only used to add URDF models into the world and to assist the scene graph.
How I removed dynamics and made it purely a visualizer is as follows:
I sent robot joint positions to
builder.AddSystem<MultibodyPositionToGeometryPose<double>>(plant);
and then connected this to
scene_graph.get_source_pose_port(plant.get_source_id().value()));
Part of the code is below:
// Two robots are in the simulations. Their position states will be muxed into one
// in order to send to MultibodyPositionToGeometryPose
// For now, robotA's position states will come from a constant zero vector,
// while robotB's will come from a receiver.
std::vector<int> mux_sizes = { 14,27 };
auto plant_states_mux =
builder.AddSystem<systems::Multiplexer>(mux_sizes);
auto robots_to_pose =
builder.AddSystem<MultibodyPositionToGeometryPose<double>>(plant);
VectorX<double> constant_vector = VectorX<double>::Zero(14);
auto constant_zero_source =
builder.template AddSystem<systems::ConstantVectorSource<double>>(
constant_vector);
builder.Connect(constant_zero_source->get_output_port(),
plant_states_mux->get_input_port(robotA_id));
builder.Connect(robotB_state_receiver->get_position_measured_output_port(),
plant_states_mux->get_input_port(robotB_id));
builder.Connect(plant_states_mux->get_output_port(0),
robots_to_pose->get_input_port());
builder.Connect(robots_to_pose->get_output_port(),
scene_graph.get_source_pose_port(plant.get_source_id().value()));
drake::geometry::ConnectDrakeVisualizer(&builder, scene_graph);
Up until the MultibodyPositionToGeometryPose port, I was able to check that the joint positions were in the correct order. I did this by looking at the function "CalcGeometryPose" and printing the plant_context_ values to the terminal.
However, when I run the simulation and send joint position to robot_B, it moves the wrong joints. Additionally, robot_A also moves its joint even though I did not send it any states.
I found out that this was due to reordering of the robot ports (either inside
plant_.get_geometry_poses_output_port().Calc(*plant_context_, output);
or in scene graph)
The positions are reordered like this: the 7 values of every robot's base pose (4 quaternion, 3 xyz) are listed first. Then the joints follow in a seemingly random order.
EX:
robotA qw, robotA qx, robotA qy, robotA qz, robotA x, robotA y, robotA z,
robotB qw, robotB qx, robotB qy, robotB qz, robotB x, robotB y, robotB z,
robotA joint 1, robotA joint 2, robotB joint 1, robotB joint 2, robotA joint 3 ... etc
So I had thought that robotA joint 1 would come right after robotA's z position. However, the reordering makes that slot robotB's qw (in this example).
I have tried to reorder the port wiring to match the joints. But when I do so, the ports switch orders.
My question is, does scenegraph or MultibodyPositionToGeometryPose reorder position state locations? (perhaps for efficiency or something?)
If so, is there a way to prevent this from happening?
Thank you
I haven't looked at your code in detail, but using a multiplexer essentially assumes that all of robot 2's state immediately and contiguously follows robot 1's. That is a fine assumption, but it is not what MBP does, and it is not true in general. However, MBP does offer APIs that guarantee some specific orderings.
If you do want the state of a single robot in vector form, it is probably best to use model instances. For instance, you can call GetPositionsAndVelocities(context, model_instance), which essentially does the multiplexer work above, but with the caveat that MBP actually knows how the states are stored.
Another recommended option is to use the documented public APIs. In general, if we document something, we mean it (unless of course there is a bug, but we try). So I recommend you look at things like Joint::position_start() and related methods to find out how things are indexed, and possibly build your own map of how you'd like things to be indexed.
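For example, a rough pydrake sketch of both ideas, assuming `plant` is the finalized MultibodyPlant and the model/joint names are placeholders (the C++ calls are analogous):

# Option 1: read one robot's positions straight out of the full plant state.
robotA = plant.GetModelInstanceByName("robotA")   # placeholder model name
context = plant.CreateDefaultContext()
qA = plant.GetPositions(context, robotA)

# Option 2: build an explicit map from your own ordering to MBP's ordering.
my_joint_order = ["robotA_joint_1", "robotA_joint_2"]   # placeholder joint names
q_index_of = {name: plant.GetJointByName(name).position_start()
              for name in my_joint_order}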
Finally, for free bodies, you can call MBP::SetFreeBodyPose() to set a pose (see Body::is_free() and related methods on Body).
However, it looks like you got rid of MBP (which is the one that knows these mappings) and only care about visualization? Maybe you could use two MultibodyPositionToGeometryPose systems, one for each robot?
Hopefully it helps.
In games like StarCraft you can have up to 200 units (per player) on a map.
There are small maps but also big ones.
When you, for example, grab 50 units and tell them to go to the other side of the map, some algorithm kicks in and they find a path through the obstacles (rivers, hills, rocks, and so on).
My question is: how does the game not slow down when there are 50 paths to calculate? In the meantime other things are happening, like drones collecting minerals and buildings being constructed. And if the map is big, it should be even harder and slower.
So even if the algorithm is good, it should take some time for 100 units.
Do you know how this works? Maybe the algorithm is similar to ones used in other games.
As I said, when you tell units to move you do not see any delay for calculating the path - they start running toward the destination immediately.
The question is how they make the units take the shortest path, but quickly.
There is no delay in most of these games (StarCraft, WarCraft, and so on).
Thank you.
I guess it just needs to subdivide the problem and memoize the results. Example: two units. Unit 1 goes from A to C, but the shortest path goes through B. Unit 2 goes from B to C.
B to C only needs to be calculated once and can be reused by both.
See https://en.m.wikipedia.org/wiki/Dynamic_programming
That Wikipedia page specifically mentions Dijkstra's algorithm for pathfinding, which works by subdividing the problem and storing results to be reused.
There is also a pretty good looking alternative here http://www.gamasutra.com/blogs/TylerGlaiel/20121007/178966/Some_experiments_in_pathfinding__AI.php where it takes into account dynamic stuff like obstacles and still performs very well (video demo: https://www.youtube.com/watch?v=z4W1zSOLr_g).
Another interesting technique takes a completely different approach:
Calculate the shortest path from the goal position to every point on the map. See the full explanation here: https://www.youtube.com/watch?v=Bspb9g9nTto - although this one is inefficient for large maps.
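A minimal sketch of that idea on a tile grid (a plain BFS "flow field"; the grid and goal below are made up):

from collections import deque

# BFS from the goal gives the distance from the goal to every reachable tile;
# each unit then just steps to the neighbor with the smallest distance, so one
# computation serves any number of units heading to the same goal.
def flow_field(grid, goal):
    # grid[y][x] is True for passable tiles; goal is an (x, y) tuple.
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] and (nx, ny) not in dist):
                dist[(nx, ny)] = dist[(x, y)] + 1
                queue.append((nx, ny))
    return dist

def next_step(dist, pos):
    x, y = pos
    options = [p for p in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)) if p in dist]
    return min(options, key=dist.get) if options else pos

grid = [[True, True, True],
        [True, False, True],
        [True, True, True]]
field = flow_field(grid, goal=(2, 2))
print(next_step(field, (0, 0)))   # every unit reuses the same field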
First of all, 100 units is not such a large number; pathfinding is fast enough on modern computers that it is not a big resource sink. Even in older games, optimizations were made to make it even faster, and you can see that units will sometimes get lost or stuck, which shouldn't really happen with a general algorithm like A*.
If the map does not change, you can preprocess it to build a set of nodes representing regions of the map. For example, if the map is two islands connected by a narrow bridge, there would be three "regions": island 1, island 2, and the bridge. In practice you would probably do this with some graph algorithm, not manually. For instance:
Score every tile with distance to nearest impassable tile.
Put all adjacent tiles with score above the threshold in the same region.
When done, gradually expand outwards from all regions to encompass low-score tiles as well.
Make a new graph where each region-region intersection is a node, and calculate shortest paths between them.
Then your pathfinding algorithm becomes two stage:
Find which region the unit is in.
Find which region the target is in.
If different regions, calculate shortest path to target region first using the region graph from above.
Once in the same region, calculate path normally on the tile grid.
When moving between distant locations, this should be much faster because you are now searching through a handful of nodes (on the region graph) plus a relatively small number of tiles, instead of the hundreds of tiles that make up those regions. For example, with three islands A, B, C and bridges 1 and 2 connecting A-B and B-C respectively, units moving from A to C don't really need to search all of B every time; they only care about the shortest way from bridge 1 to bridge 2. If you have a lot of islands, this can really speed things up.
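A compact sketch of that two-stage search; it assumes the preprocessing described above has already produced a region_of tile-to-region map and a region_graph adjacency dict, both of which are placeholders here:

import heapq

def region_route(region_graph, start_region, goal_region):
    # Stage 1: BFS on the small region graph (unweighted for brevity); returns
    # the set of regions the unit is allowed to walk through.
    frontier, came_from = [start_region], {start_region: None}
    while frontier:
        r = frontier.pop(0)
        if r == goal_region:
            break
        for nbr in region_graph[r]:
            if nbr not in came_from:
                came_from[nbr] = r
                frontier.append(nbr)
    route, r = [], goal_region
    while r is not None:
        route.append(r)
        r = came_from.get(r)
    return set(route)

def astar_within_regions(grid, region_of, allowed, start, goal):
    # Stage 2: ordinary grid A*, but restricted to tiles whose region lies on
    # the region route, which keeps the searched area small on large maps.
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), start)]
    g = {start: 0}
    came_from = {}
    while open_set:
        _, p = heapq.heappop(open_set)
        if p == goal:
            break
        x, y = p
        for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            qx, qy = q
            if (0 <= qy < len(grid) and 0 <= qx < len(grid[0]) and grid[qy][qx]
                    and region_of.get(q) in allowed
                    and g[p] + 1 < g.get(q, float("inf"))):
                g[q] = g[p] + 1
                came_from[q] = p
                heapq.heappush(open_set, (g[q] + h(q), q))
    path, p = [goal], goal
    while p in came_from:
        p = came_from[p]
        path.append(p)
    return path[::-1]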
Of course, the problem is that regions may change due to, for instance, buildings blocking a path or units temporarily obstructing a passageway. The solution to this is up to your imagination. You could try to carefully update the region graph every time the map is altered, if the map is rarely altered in your game. Or you could just let units naively trust the region graph until they bump into an obstacle. In some games you can see particularly bad cases of the latter, because a unit will keep running toward a valley even after it has been walled off, and only after hitting the wall will it turn back and go around. I think the original StarCraft had this issue when units blocked a narrow path: they would take a really long detour instead of waiting for the crowd to free up the bridge.
There are also algorithms that accomplish analogous optimizations without explicitly building the region graph; for instance, JPS (jump point search) works roughly this way.
I want to get the properly rendered projection result from a Stage3D framework that presents something of a 'gray box' interface via its API. It is gray rather than black because I can see this critical snippet of source code:
matrix3D.copyFrom (renderable.getRenderSceneTransform (camera));
matrix3D.append (viewProjection);
The projection rendering technique that perfectly suits my needs comes from a helpful tutorial that works directly with AGAL rather than any particular framework. Its comparable rendering logic snippet looks like this:
cube.mat.copyToMatrix3D (drawMatrix);
drawMatrix.prepend (worldToClip);
So, I believe the correct, general summary of what is going on here is that both pieces of code are setting up the proper combined matrix to be sent to the Vertex Shader where that matrix will be a parameter to the m44 AGAL operation. The general description is that the combined matrix will take us from Object Local Space through Camera View Space to Screen or Clipping Space.
My problem can be summarized as arising from my ignorance of proper matrix operations. I believe my failed attempt to merge the two environments arises precisely because prepending one matrix to another is not, and was never intended to be, equivalent to appending that matrix to the other. My request, then, can be summarized this way: because I have no control over the calling sequence the framework will issue (i.e., I must live with an append operation), I can only try to fix things on my side, where I prepare the matrix that is to be appended. That code is not black-boxed, but it is too complex for me to know how to change it so that it meets the interface requirements posed by the framework.
Is there some sequence of inversions, transformations, or other maneuvers that would let me modify a viewProjection matrix that was designed to be prepended, so that it turns out right when it is instead appended to the object's world-space coordinates?
I am providing an answer more out of desperation than sure understanding, and still hope I will receive a better answer from those more knowledgeable. From Dunn and Parberry's "3D Math Primer" I learned that "transposing the product of two matrices is the same as taking the product of their transposes in reverse order."
Without being able to understand how to enter text involving superscripts, I am not sure if I can reduce my approach to a helpful mathematical formulation, so I will invent a syntax using functional notation. The equivalency noted by Dunn and Parberry would be something like:
transpose (A x B) = transpose (B) x transpose (A)
That comes close to solving my problem, which problem, to restate, is really just a problem arising out of the fact that I cannot control the behavior of the internal matrix operations in the framework package. I can, however, perform appropriate matrix operations on either side of the workflow from local object coordinates to those required by the GPU Vertex Shader.
I have not completed the test of my solution, which requires the final step to be taken in the AGAL shader, but I have been able to confirm in AS3 that the last 'un-transform' does yield exactly the same combined raw data as the example from the author of the camera with the desired lens properties whose implementation involves prepending rather than appending.
BA = transpose (transpose (A) x transpose (B))
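As a quick sanity check of that algebra, here is a small numpy snippet that verifies the identity numerically on random 4x4 stand-ins; how it maps onto the framework's append/prepend calls depends on that framework's multiplication order, so treat it purely as a check of the matrix identity:

import numpy as np

# Check: B x A equals transpose(transpose(A) x transpose(B)).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
print(np.allclose(B @ A, (A.T @ B.T).T))   # True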
I have also not yet tested to see if these extra calculations are so processing intensive as to reduce my application frame rate beyond what is acceptable, but am pleased at least to be able to confirm that the computations yield the same result.