What is the `node_dim` argument referring to in the message passing class? - machine-learning

In the PyTorch geometric tutorial for creating Message Passing Networks they have this paragraph at the start when explaining what the class does:
MessagePassing(aggr="add", flow="source_to_target", node_dim=-2): Defines the aggregation scheme to use ("add", "mean" or "max") and the flow direction of message passing (either "source_to_target" or "target_to_source"). Furthermore, the node_dim attribute indicates along which axis to propagate.
I don't understand what this node_dim is referring to, or why it is -2. I have looked at the documentation for the MessagePassing class, and it says there that it is the axis along which to propagate -- but this still doesn't really clarify what we are doing here or why the default is -2 (presumably that is how you propagate information at the node level). Could someone offer some explanation of this to me please?

After referring to here and here, I think the key is the output of the message function.
In most cases, that output has shape [edge_num, emb_out], and if we set node_dim to -2 we aggregate along the edge_num axis, using the indices of the target nodes.
This is exactly the process that aggregates information from the source nodes.
The result after aggregation has shape [node_num, emb_out].
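As a minimal sketch (the layer and variable names here are made up for illustration, not taken from the tutorial), a MessagePassing layer whose message output has shape [edge_num, emb_out] will, with node_dim=-2, scatter-aggregate along that edge axis using the target-node indices and return [node_num, emb_out]:

import torch
from torch_geometric.nn import MessagePassing

class PlainConv(MessagePassing):
    def __init__(self):
        # node_dim=-2: the node axis is the second-to-last axis of the message output
        super().__init__(aggr="add", node_dim=-2)

    def forward(self, x, edge_index):
        # x: [node_num, emb_out], edge_index: [2, edge_num]
        return self.propagate(edge_index, x=x)

    def message(self, x_j):
        # x_j: [edge_num, emb_out], the source-node feature of each edge
        return x_j

x = torch.randn(4, 8)                      # 4 nodes, emb_out = 8
edge_index = torch.tensor([[0, 1, 2, 3],   # source nodes
                           [1, 2, 3, 0]])  # target nodes
out = PlainConv()(x, edge_index)
print(out.shape)                           # torch.Size([4, 8]) = [node_num, emb_out]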

Related

Can you generate a scene graph after a plant has been finalized?

I'm working on a project that requires me to add a model through Parser (which requires the plant to be of the same scalar type as the array used) before setting the position of the model in said plant and taking distance queries. These queries only work when the query object generated from the scene graph is of type float.
I've run into a problem where setting the position doesn't work because the array being used is of type AutoDiff. A possible solution would then be to convert the plant of type float to AutoDiff with plant.ToAutoDiff(), but this only creates a copy of the plant without coupling it to the scene graph (and in turn the query object) from which the queries are derived. Taking queries with a query object generated from the original plant would then fail to reflect the new position passed to the AutoDiff copy.
Is there a way to create a new scene graph from the already finalized symbolic copy of the original plant, so that I can perform the queries with it?
A couple of thoughts:
Don't just convert the plant to autodiff. Convert the whole diagram. That will give you a converted, connected network.
You're stuck with the current workflow. Presumably, your proximity geometries are specified in your parsed file (as <collision> tags). The parsing process is ephemeral. The declaration is consumed, passed through MultibodyPlant into SceneGraph. If there is no SceneGraph at parse time, all knowledge of the declared collision geometry is forgotten.
So, the typical workflow is:
Create a float-valued diagram.
Scalar convert it to an AutoDiff-valued diagram.
Keep both around to serve the different roles.
We don't have a tutorial that directly shows scalar converting an entire diagram, but it's akin to what is shown in this MultibodyPlant-specific tutorial. Just call ToScalarType() on the Diagram root.
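For concreteness, here is a rough pydrake sketch of that workflow (the model file name is a placeholder, the exact Parser call depends on your Drake version, and ToAutoDiffXd() is used here as the concrete form of the scalar conversion):

from pydrake.multibody.parsing import Parser
from pydrake.multibody.plant import AddMultibodyPlantSceneGraph
from pydrake.systems.framework import DiagramBuilder

builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=0.0)
Parser(plant).AddModels("model.sdf")  # placeholder model file
plant.Finalize()
diagram = builder.Build()             # float-valued diagram

# Scalar-convert the whole diagram, not just the plant, so the converted
# plant stays wired to a converted SceneGraph.
diagram_ad = diagram.ToAutoDiffXd()
plant_ad = diagram_ad.GetSubsystemByName(plant.get_name())
scene_graph_ad = diagram_ad.GetSubsystemByName(scene_graph.get_name())

# Keep both around: use diagram for float-valued work and diagram_ad for
# AutoDiff-valued positions and distance queries.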

How to use MinDistanceConstraint in pydrake

In this repo, @gizatt uses the following call to assemble collision constraints for the Kuka iiwa:
ik.MinDistanceConstraint(tree, collision_tol, list(), set())
Here, what do list() and set() signify? Both seem to be empty.
Let's just say I have an item (item 1) that consists of 6 bodies (within one URDF) and another object (item 2) in my RigidBodyTree that consists of one body (within a separate URDF), and I only want to check for collisions between any of the 6 bodies that make up item 1 and item 2. Is there a way to set this function so that it doesn't check for collisions within all the bodies in item 1 but only for collisions between item 1 and item 2?
Finally, I currently have the following error when I use this function:
[2018-11-14 19:39:20.812] [console] [warning] Attempting to compute distance between two collision elements, at least one of which is non-convex.
I took @gizatt's advice and converted the meshes of each link within item 1 to convex hulls using MeshLab, and when I look at each mesh in the visualizer they all appear convex to me. However, I still get this error. Is there any other reason this error would pop up?
The documentation for that method is here:
https://drake.mit.edu/pydrake/pydrake.solvers.ik.html?highlight=mindistanceconstraint#pydrake.solvers.ik.MinDistanceConstraint
The last two arguments that Greg used are about the "active" bodies. They help if you want to ignore some bodies entirely in the collision computation.
If you want to ignore some collision pairs, then use collision filter groups:
https://drake.mit.edu/pydrake/pydrake.multibody.rigid_body_tree.html?highlight=collision%20filter#pydrake.multibody.rigid_body_tree.RigidBodyTree.DefineCollisionFilterGroup
Our RBT/Bullet wrapper assumes every mesh is convex EXCEPT static/anchored geometry. It seems likely that you are getting that warning because you are collision checking against the static/anchored geometry?
FWIW -- the documentation is MUCH more complete for MultibodyTree than for RigidBodyTree, but for this particular query I think you're right to use RBT -- multibody is not quite there yet.
Here, what do list() and set() signify? Both seem to be empty.
These arguments are historical artifacts and are almost never what you want to use. I would recommend leaving them empty as in the example you provided. The first restricts the constraint to consider only the bodies specified by active_bodies_idx. The second restricts the constraint to consider only the collision groups whose names are contained in active_group_names. Note that the "collision group" concept for the active_group_names argument is not the same as the "collision filter group" concept.
Is there a way to set this function so that it doesn't check for collisions within all the bodies in item 1 but only for collisions between item 1 and item 2?
You were on the right track. You want to add a collision filter group that contains all of the bodies in item 1 and then set that collision group to ignore itself. The following code requires the AddCollisionFilterIgnoreTarget() binding added by PR #10085.
tree.DefineCollisionFilterGroup("1_filtergroup")
# item_1_model_id is the model instance id assigned when item 1's URDF was added.
tree.AddCollisionFilterGroupMember("1_filtergroup", "base_link", item_1_model_id)
# Repeat the previous line for all bodies in item 1.
tree.AddCollisionFilterIgnoreTarget("1_filtergroup", "1_filtergroup")
You can then create a constraint with the desired behavior by calling ik.MinDistanceConstraint(tree, collision_tol, list(), set()).
Finally, I currently have the following error when I use this function: [2018-11-14 19:39:20.812] [console] [warning] Attempting to compute distance between two collision elements, at least one of which is non-convex. ... Is there any other reason this error would pop up?
As @Russ Tedrake mentioned, all mesh collision elements attached to anchored (welded to the world) bodies are converted to non-convex mesh objects in the Bullet backend, regardless of the convexity of the original mesh. This is unfortunate, but will most likely not be addressed, as all RigidBodyTree-related code is nearing end-of-life. For IK purposes, you can work around this by attaching the mesh to a dummy body that is connected to the world by a prismatic or revolute joint whose upper and lower limits are both 0. That will force the backend to work with the convex hull of the provided mesh. You'll then need to remove the element corresponding to the dummy joint from the configuration returned by IK.

Element type error being observed

I am using C3D8R elements and creating section points across elements. However, when doing so, I get the error below:
"ELEMENT TYPE C3D8R HAS NO OUTPUT AT SECTION POINT 1. SECTION POINT REMOVED"
Can anyone let me know how I should address this error?
You cannot define section points this way for solid elements such as C3D8R, since they use a completely different formulation; even a casual reading of the documentation makes that clear.
However, you can discretize 3D geometry using "continuum shell" elements (in Abaqus: SC6R, SC8R). These elements have displacement DOF only and the thickness is defined by the nodal geometry directly. You should be sure to read the docs and become familiar with their usage before trusting your analysis results.
As far as I'm aware, you can define continuum shell element section properties in the same way as for conventional shell elements, with a couple of caveats. For example, you cannot specify an offset for the reference surface from the element midsurface when the section properties are specified by one or more material layers.

How to fix Amos error: "observed variable is represented by an ellipse in the path diagram"?

I received the following question by email and have seen a lot of students with this problem:
I am trying to fit a structural equation model in Amos, but when I click "calculate estimates", I get the following error: "observed variable [variable name] is represented by an ellipse in the path diagram". Could you please advise me of what I am doing wrong?
IBM Help discusses this error but isn't that helpful.
In practice, I've seen this error come up a number of times. It can occur because you have incorrectly specified a variable as latent when you wanted it to be observed. More commonly, however, it is the result of giving an inappropriate name to a latent variable. Specifically, it is relatively easy to give a latent factor a name that is the same as an observed variable in your data file.
For example, one time I had some personality variables in a dataset and the extraversion items were called E1, E2, E3, and so on. These are also common names for residuals, so when the residuals were given those names, there was a conflict with the names in the data file.
Another even more common cause is when you name a latent factor an appropriate name (e.g., selfesteem, extraversion, jobsatisfaction, etc.) and you have already created a scale score in your data file with the same name. This also causes the conflict.
The basic solution is just to give the latent variable a unique name that doesn't conflict with one in the data file. So for example, name the variable selfesteem_factor rather than selfesteem if you already have a variable called selfesteem.
I recently experienced the same problem. I followed Jeromy's advice and it worked. That error message is caused by giving the same name to a latent variable and an observed variable. In my case, I had a latent variable, trust, but I had also created a summated scale for trust (making it an observed variable), so I got the same error message. When I changed the name of the latent variable, the model ran properly.

Why does ELKI need db.in file in addition to distance matrix? Also what should db.in file contain?

I tried to follow this tutorial on using ELKI with pre-computed distances for clustering.
http://elki.dbs.ifi.lmu.de/wiki/HowTo/PrecomputedDistances
I used the following set of command line options:
-dbc.filter FixedDBIDsFilter -dbc.startid 0 -algorithm clustering.OPTICS
-algorithm.distancefunction external.FileBasedDoubleDistanceFunction
-distance.matrix /path/to/matrix -optics.minpts 5 -resulthandler ResultWriter
ELKI fails with a configuration error saying a db.in file is needed to run the computation.
The following configuration errors prevented execution:
No value given for parameter "dbc.in":
Expected: The name of the input file to be parsed.
No value given for parameter "parser.distancefunction":
Expected: Distance function used for parsing values.
My question is: what is the db.in file? Why should I provide it in addition to the distance matrix file, since the pairwise distance matrix completely specifies all the information about the point cloud? (I also don't have access to any information other than the pairwise distances.)
What should I do about db.in? Should I override it, or specify some dummy information? Kindly help me understand.
Thank you.
This is documented in the ELKI HowTos:
http://elki.dbs.ifi.lmu.de/wiki/HowTo/PrecomputedDistances
Using without primary data
-dbc DBIDRangeDatabaseConnection -idgen.count 100
However, there is a bug (a patch is on the HowTo page and will be in the next release), so right now you can't fully use this; as a workaround you can use a text file that enumerates the objects.
The reason for this is that ELKI is designed to work on multi-relational data; it's not just processing matrices. Some algorithms may, for example, need a geographic representation of an object, some measurements for that object, and a label for evaluation. That is three relations.
What the DBIDRange data source essentially does is create a single "fake" relation that is just the DBIDs 0 to 99. For algorithms that don't need actual data, only distances (e.g. LOF, DBSCAN, or OPTICS), it is sufficient to have object IDs and a distance matrix.
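Putting the pieces together (assuming the patched release, or the enumeration workaround above), the full parameterization would then look roughly like this, with the matrix path as a placeholder:
-dbc DBIDRangeDatabaseConnection -idgen.count 100
-algorithm clustering.OPTICS
-algorithm.distancefunction external.FileBasedDoubleDistanceFunction
-distance.matrix /path/to/matrix
-optics.minpts 5
-resulthandler ResultWriter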
