Matching Joint Actuator Input Dimensions with PID Controller Output Dimensions [Drake]

I am trying to wire a PID controller's output to the input of a joint actuator. For context (pun intended), I'm able to wire the continuous-state output port of my platform model, which is just a revolute joint, to the PID controller's input, because the dimensions are both size 2. I think I'm conceptually misunderstanding something about the actual data types my ports return. Here's a picture of my diagram context and the associated code:
Diagram context visualized
Code screenshot pt 1
Code screenshot pt 2 with the error
Plant context pt 1
Plant context pt 2 with some more info
Any help is greatly appreciated! The error is in the commented-out line 58 of screenshot 2. The dimension of controller.get_output_port_control() is 1 (which makes sense), but the required dimension of the actuation input port is 2 (which I don't understand).

I can't be sure without seeing the SDF files, but when you parse an SDF, you automatically get a joint actuator for every joint whose effort limit is not 0. I suspect that you are getting another actuator that way.
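One quick way to check that hypothesis, without touching Drake at all, is to scan the SDF for joints whose effort limit is nonzero (or missing, which defaults to unlimited); each of those gets an actuator, and their count is the width of the actuation input port. The snippet below is only a diagnostic sketch, and the embedded SDF is invented for illustration:

```python
# Sketch: scan an SDF for joints that will receive an automatic actuator.
# Drake adds a JointActuator for any joint whose effort limit is not 0,
# so counting these predicts the actuation input port's width.
import xml.etree.ElementTree as ET

SDF = """<sdf version="1.6">
  <model name="platform">
    <joint name="pivot" type="revolute">
      <axis><limit><effort>10</effort></limit></axis>
    </joint>
    <joint name="passive" type="revolute">
      <axis><limit><effort>0</effort></limit></axis>
    </joint>
  </model>
</sdf>"""

def actuated_joints(sdf_text):
    root = ET.fromstring(sdf_text)
    names = []
    for joint in root.iter("joint"):
        effort = joint.find("./axis/limit/effort")
        # A missing <effort> tag means an unlimited effort, which
        # also produces an actuator.
        if effort is None or float(effort.text) != 0.0:
            names.append(joint.get("name"))
    return names

print(actuated_joints(SDF))  # ['pivot'] -> one actuator, port width 1
```

If this prints two joint names for your platform SDF, the second actuator explains the size-2 actuation input port.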

Related

Comsol: Infinite Element Domain module

I want to simulate a 2D heat-transfer process in the subsurface on a region that is infinite in the r-direction. The most basic way to model this is to draw a geometry that is very long in the r-direction. I have done this, and the results I obtain are correct, as they match the analytical solution. As you may know, Comsol has a capability called Infinite Element Domain which serves exactly this purpose. In that approach, we define a bounded geometry on which we want to solve the PDE and also draw a small domain acting as the Infinite Element Domain. However, in this case the results are not correct, because they do not match the analytical solution. Is there anything I am missing to correctly use the Infinite Element Domain in Comsol?
Any help or comment would be appreciated.
Edit:
I edited the post to be more specific.
Please consider the following figure where a fluid with high temperature is being injected into a region with lower temperature:
https://i.stack.imgur.com/BQycC.png
The equation to solve is:
https://i.stack.imgur.com/qrZcK.png
With the following initial and boundary conditions (note that the upper and lower boundary condition is no-flux):
https://i.stack.imgur.com/l7pHo.png
We want to obtain the temperature profile over the span rw < r < 140 m (rw is very small, equal to 0.005 m here) at different times. One way to model this numerically in Comsol is to draw a rectangle that is 2000 m long in the r-direction and extract results only over r ∈ [rw, 140] m:
https://i.stack.imgur.com/BKCOi.png
The results in this case are fine, because they match the analytical solution well.
Another way to model this is to replace the above geometry with a bounded one that is [rw, 140] m in the r-direction and then augment it with an Infinite Element Domain meshed with a mapped mesh, as follows:
https://i.stack.imgur.com/m9ksm.png
Here, I have set the thickness of the Infinite Element Domain to 10 m in the r-direction. However, the results in this case do not match the analytical solution (or the case above, where no Infinite Element Domain was used). Is there anything I am missing in Comsol? I have also changed some Infinite Element settings, such as the physical width and the distance, but I saw no change in the results.
BTW, here are the results:
https://i.stack.imgur.com/cdaPH.png

Does ```/gun/polarization 0. 0. -1.``` in histo.mac of the Pol01 example represent the photon spin?

On page 348 of the Geant4 User's Guide for Application Developers (refer to the following link)
http://ftp.tku.edu.tw/Linux/Gentoo/distfiles/BookForAppliDev-4.10.03.pdf
it states that
"Pol01 - interaction of polarized beam (e.g. circularly polarized photons) with polarized target"
Lines 25 and 26 of the histo.mac file in the Pol01 example contain the following two instructions:
/gun/polarization 0. 0. -1.
/gun/particle gamma
The direction of this gamma beam is along the z-axis, so, assuming the code is correct, the first line cannot be describing the polarization state of the electric field. Am I to take it, then, that in this context the first line defines the photon spin projection, and that it is therefore defining a circularly polarized photon, either left- or right-handed depending on which convention Geant4 uses?
Reading the code, Geant4 treats the three-vector you pass to /gun/polarization for a photon as the (S1, S2, S3) components of a Stokes vector, used in the calculations described in the Wikipedia article below.
https://en.wikipedia.org/wiki/Stokes_parameters
A (0,0,1) vector will represent circularly polarized light.
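As a small illustrative sketch of that reading, the code below classifies a reduced Stokes vector (s1, s2, s3) as in the Wikipedia article. The handedness label follows one common convention; which handedness (0, 0, -1) actually maps to in Geant4 depends on its convention, as discussed above:

```python
# Toy classifier for a reduced Stokes vector (s1, s2, s3), where s3 is the
# circular component. Handedness labels here follow one common convention.
import math

def classify_stokes(s1, s2, s3):
    p = math.sqrt(s1**2 + s2**2 + s3**2)  # degree of polarization
    if p == 0:
        return "unpolarized"
    if abs(s3) == p:
        # Purely circular: sign of s3 picks the handedness (convention-dependent).
        return "circular (%s-handed)" % ("right" if s3 > 0 else "left")
    if s3 == 0:
        return "linear"
    return "elliptical"

print(classify_stokes(0.0, 0.0, -1.0))  # circular (left-handed)
print(classify_stokes(1.0, 0.0, 0.0))   # linear
```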

Does scene_graph.get_source_pose_port() mix up the order of the inputs?

Currently, I am trying to visualize multiple robots in Drake. This means that dynamics will not be taken into account and that the MultibodyPlant (MBP) will not be added as a system. MBP is only used to add URDF models into the world and to assist the scene graph.
How I removed dynamics and made it purely a visualizer is as follows:
I sent the robot joint positions to
builder.AddSystem<MultibodyPositionToGeometryPose<double>>(plant);
and then connected that system's output to
scene_graph.get_source_pose_port(plant.get_source_id().value());
Part of the code is below:
// Two robots are in the simulation. Their position states will be muxed
// into one vector to send to MultibodyPositionToGeometryPose.
// For now, robotA's position states come from a constant zero vector,
// while robotB's come from a receiver.
std::vector<int> mux_sizes = {14, 27};
auto plant_states_mux =
    builder.AddSystem<systems::Multiplexer>(mux_sizes);
auto robots_to_pose =
    builder.AddSystem<MultibodyPositionToGeometryPose<double>>(plant);
VectorX<double> constant_vector = VectorX<double>::Zero(14);
auto constant_zero_source =
    builder.template AddSystem<systems::ConstantVectorSource<double>>(
        constant_vector);
builder.Connect(constant_zero_source->get_output_port(),
                plant_states_mux->get_input_port(robotA_id));
builder.Connect(robotB_state_receiver->get_position_measured_output_port(),
                plant_states_mux->get_input_port(robotB_id));
builder.Connect(plant_states_mux->get_output_port(0),
                robots_to_pose->get_input_port());
builder.Connect(robots_to_pose->get_output_port(),
                scene_graph.get_source_pose_port(plant.get_source_id().value()));
drake::geometry::ConnectDrakeVisualizer(&builder, scene_graph);
Up until the MultibodyPositionToGeometryPose port, I was able to check that the joint positions were in the correct order. This was done by looking at the function "CalcGeometryPose" and printing the plant_context_ values to the terminal.
However, when I run the simulation and send joint position to robot_B, it moves the wrong joints. Additionally, robot_A also moves its joint even though I did not send it any states.
I found out that this was due to reordering of the robot ports (either inside
plant_.get_geometry_poses_output_port().Calc(*plant_context_, output);
or in scene graph)
The ports reorder like this: the 7 base-pose values for every robot (4 quaternion, 3 xyz) are listed first. Then the joints follow in an interleaved order.
EX:
robotA qw, robotA qx, robotA qy, robotA qz, robotA x, robotA y, robotA z,
robotB qw, robotB qx, robotB qy, robotB qz, robotB x, robotB y, robotB z,
robotA joint 1, robotA joint 2, robotB joint 1, robotB joint 2, robotA joint 3 ... etc
So I had thought that the state for robotA joint 1 would come right after robotA's z position. However, the reordering makes that slot robotB's qw (in this example).
I have tried to reorder the port wiring to match the joints. But when I do so, the ports switch orders.
My question is: does SceneGraph or MultibodyPositionToGeometryPose reorder the position states (perhaps for efficiency)?
If so, is there a way to prevent this from happening?
Thank you
I haven't looked at your code in detail, but using a multiplexer essentially assumes that the state of robot 2 immediately and contiguously follows that of robot 1. That is a fine assumption, but it is not what MBP does, and it is not true in general. However, MBP does offer APIs that guarantee some specific orderings.
If you do want the state of a single robot in vector form, it is probably best to use model instances. For instance, you can call GetPositionsAndVelocities(context, model_instance), which essentially does the multiplexer work above, but with the advantage that MBP does know how the states are stored.
Another recommended option is to use the documented public APIs. In general, if we document something, we mean it (unless of course there is a bug, but we try). So I recommend you look at things like Joint::position_start() and related methods to find out how things are indexed, and possibly build your own map of how you'd like things to be indexed.
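As a sketch of that index-map idea, the snippet below builds a permutation from a "robot A then robot B" layout into the plant's internal ordering. All names and position_start values here are invented for illustration; in real code you would query them via Joint::position_start() and num_positions() (or their pydrake equivalents):

```python
# Hypothetical layout: two free bodies (7 pose values each) listed first,
# then single-dof joints interleaved, mimicking the ordering described above.
desired_order = ["A_base", "A_j1", "A_j2", "B_base", "B_j1"]  # your layout
position_start = {"A_base": 0, "B_base": 7,            # base poses first
                  "A_j1": 14, "B_j1": 15, "A_j2": 16}  # joints interleaved
num_positions = {"A_base": 7, "B_base": 7, "A_j1": 1, "A_j2": 1, "B_j1": 1}

def build_permutation(order, start, count):
    """perm[i] is the plant-side index of entry i in your own ordering."""
    perm = []
    for name in order:
        perm.extend(range(start[name], start[name] + count[name]))
    return perm

perm = build_permutation(desired_order, position_start, num_positions)

# Scatter an "A then B" vector into a plant-ordered vector.
my_q = list(range(17))      # placeholder values in your ordering
plant_q = [0.0] * 17
for i, j in enumerate(perm):
    plant_q[j] = my_q[i]
```

With such a map in place, a small gain/matrix system (or a custom LeafSystem) can do the reordering between the multiplexer and MultibodyPositionToGeometryPose.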
Finally, for free bodies, you can call MBP::SetFreeBodyPose() to set a pose (see Body::is_free() and related methods in Body).
However, it looks like you got rid of MBP (which is the one that knows these mappings) and only care about visualization? Maybe you could use two MultibodyPositionToGeometryPose systems, one for each robot?
Hope this helps.

How to use minDistanceConstraint in pydrake

In this repo, @gizatt uses the following command to assemble collision constraints for the Kuka iiwa:
ik.MinDistanceConstraint(tree, collision_tol, list(), set())
Here, what do list() and set() signify? Both seem to be empty here.
Let's say I have an item (item 1) that consists of 6 bodies (within one URDF) and another object (item 2) in my RigidBodyTree that consists of one body (within a separate URDF), and I only want to check for collisions between any of the 6 bodies that make up item 1 and item 2. Is there a way to set up this function so that it doesn't check for collisions among the bodies within item 1, but only for collisions between item 1 and item 2?
Finally, I currently have the following error when I use this function:
[2018-11-14 19:39:20.812] [console] [warning] Attempting to compute distance between two collision elements, at least one of which is non-convex.
I took @gizatt's advice and converted the meshes of each link within item 1 to convex hulls using MeshLab, and when I look at each mesh in the visualizer, they all appear convex to me. However, I still get this error. Is there any other reason this error would pop up?
The documentation for that method is here:
https://drake.mit.edu/pydrake/pydrake.solvers.ik.html?highlight=mindistanceconstraint#pydrake.solvers.ik.MinDistanceConstraint
The last two arguments that greg used are about the "active" bodies. This will help you if you want to ignore some bodies entirely from the collision computation.
If you want to ignore some collision pairs, then use collision filter groups:
https://drake.mit.edu/pydrake/pydrake.multibody.rigid_body_tree.html?highlight=collision%20filter#pydrake.multibody.rigid_body_tree.RigidBodyTree.DefineCollisionFilterGroup
Our RBT/Bullet wrapper assumes every mesh is convex EXCEPT static/anchored geometry. It seems likely that you are getting that warning because you are collision checking against the static/anchored geometry.
FWIW, the documentation is MUCH more complete for MultibodyTree than for RigidBodyTree, but for this particular query I think you're right to use RBT; multibody is not quite there yet.
Here, what do list() and set() signify. Both seem to be empty here.
These arguments are historical artifacts and are almost never what you want to use. I would recommend leaving them empty as in the example you provided. The first restricts the constraint to consider only the bodies specified by active_bodies_idx. The second restricts the constraint to consider only the collision groups whose names are contained in active_group_names. Note that the "collision group" concept for the active_group_names argument is not the same as the "collision filter group" concept.
Is there a way to set this function so that it doesn't check
for collisions within the all the bodies in item 1 but only for
collisions between item 1 and item 2?
You were on the right track. You want to add a collision filter group that contains all of the bodies in item 1 and then set that collision group to ignore itself. The following code requires the AddCollisionFilterIgnoreTarget() binding added by PR #10085.
tree.DefineCollisionFilterGroup("1_filtergroup")
tree.AddCollisionFilterGroupMember("1_filtergroup", "base_link", item_1_model_id)  # model instance id of item 1
# Repeat the previous line for all bodies in item 1.
tree.AddCollisionFilterIgnoreTarget("1_filtergroup", "1_filtergroup")
You can then create a constraint with the desired behavior by calling ik.MinDistanceConstraint(tree, collision_tol, list(), set()).
Finally, I currently have the following error when I use this
function: [2018-11-14 19:39:20.812] [console] [warning] Attempting to
compute distance between two collision elements, at least one of which
is non-convex. ... Is there any other reason this error
would pop up?
As @Russ Tedrake mentioned, all mesh collision elements attached to anchored (welded to the world) bodies are converted to non-convex mesh objects in the Bullet backend, regardless of the convexity of the original mesh. This is unfortunate, but will most likely not be addressed, as all RigidBodyTree-related code is nearing end-of-life. For IK purposes, you can work around this by attaching the mesh to a dummy body that is connected to the world by a prismatic or revolute joint whose upper and lower limits are both 0. That will force the backend to work with the convex hull of the provided mesh. You'll then need to remove the element corresponding to the dummy joint from the configuration returned by IK.

iOS Import .obj file to Model I/O without duplicating vertices

I'm trying to import a .obj file for use in SceneKit via the Model I/O framework. I initially used the simple MDLAsset initWithURL: initializer, but after transferring the mesh to an SCNGeometry, I realized this function was triangulating the mesh, such that each face had 3 unique vertices, with separate vertices at the same location for bordering faces. This was causing major problems with my other functions, so I tried to fix it by instead using initWithURL:vertexDescriptor:bufferAllocator:preserveTopology: with preserveTopology set to YES and the descriptor/allocator left as the defaults (nil). Preserving topology fixed my problem of duplicated vertices, so the faces/edges were all good, but in the process I lost the normals data.
By "lost the normals" I don't mean multiple indexing; I mean that after setting preserveTopology to YES, the buffer did not contain any normal values at all. Whereas before it was v1/n1/v2/n2... with a stride of 24 bytes (3 dimensions × 4 bytes/float × 2 attributes), now the first half of the buffer is v1/v2/... with a stride of 12, and the entire second half of the buffer is just 0.0 floats.
Also, something weird: when you look at the SCNGeometrySources of the geometry, there are 2 sources, one with semantic kGeometrySourceSemanticVertex and one with semantic kGeometrySourceSemanticNormal. You would think the vertex source would contain the position data and the normal source the normal data, but that is not the case. No matter what you set preserveTopology to, they are buffers sized to hold both position and normal data, with identical values. So when I said before that there was no normal data, I mean both of these buffers, semantic vertex AND semantic normal, went from being v1/n1/v2/n2... to v1/v2/.../(0.0, 0.0, 0.0)/(0.0, 0.0, 0.0)/... I went into the MDLMesh's buffer (before the transfer to SceneKit) and found the same problem, so the problem must be with initWithURL:, not with the Model I/O to SceneKit bridge.
So I figured there must be something wrong with the default vertex descriptor and buffer allocator (since I was using nil) and went about trying to create my own that matched these 2 possible data formats. Alas after much trying I was unable to get something that worked.
Any ideas on how I should do this? How do I give MDLAsset the proper vertexDescriptor and bufferAllocator (I feel like nil should be OK here) for importing a .obj file? Thanks.
An OBJ file with vertices and normals has vertices, indicated by v lines; normals, indicated by vn lines; and faces, indicated by f lines.
The v and vn lines are just the floating-point values you expect, and each f line is of the form:
f v0//n0 v1//n1 etc
Since OpenGL and Metal don't allow multiple indexing, you'll see the first effect of vertices being duplicated. For example,
f 0//0 1//2 2//0
can't work as a vertex buffer because it would require different indices per vertex. So typical OBJ parsers have to create new vertices that allow the face to become
f 0//0 1//1 2//2
The preserveTopology option doesn't help you here. It preserves the connectivity and shape of the mesh (no triangulation occurs and shared edges remain shared), but it still enforces a single index per vertex component.
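To make the duplication concrete, here is a toy sketch (plain Python, not Model I/O) of the re-indexing a typical OBJ importer performs on the f v//n corners above:

```python
# Sketch of typical OBJ importer behavior: collapse each unique
# (vertex index, normal index) pair from "f v//n" corners into one new
# combined vertex, so a single index per corner suffices.
def single_index(faces):
    """faces: list of triangles, each a list of (v_idx, n_idx) pairs."""
    remap, new_faces = {}, []
    for face in faces:
        corners = []
        for pair in face:
            if pair not in remap:
                remap[pair] = len(remap)  # allocate a new combined vertex
            corners.append(remap[pair])
        new_faces.append(corners)
    return remap, new_faces

# The corner 0//0 appears twice and reuses one combined vertex, while
# 1//2 and 1//1 become two distinct vertices despite sharing position 1.
remap, tris = single_index([[(0, 0), (1, 2), (2, 0)],
                            [(0, 0), (1, 1), (2, 2)]])
print(tris)  # [[0, 1, 2], [0, 3, 4]]
```

The two input triangles reference only 3 positions and 3 normals, yet the single-indexed result needs 5 combined vertices; this is exactly the duplication that renderers require and that preserveTopology cannot undo.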
One solution would be to make sure that your tool that is outputting the OBJ files uses single indexing during export, if that is an option.
Another option, though this won't solve the problem immediately, would be to file a request that multiple indexing be supported at the Model I/O level. SceneKit would still have to uniquely index, because it has to be able to render.
Another option would be to use a format like PLY that doesn't have multiple indexing.
