Can I visualize a Multibody pose without explicitly calculating every body's full transform? - drake

In the examples/quadrotor/ example, a custom QuadrotorPlant is specified, and its output is passed into QuadrotorGeometry, where the QuadrotorPlant state is packaged into a FramePoseVector for the SceneGraph to visualize.
The relevant code segment in QuadrotorGeometry that does this:
...
builder->Connect(
    quadrotor_geometry->get_output_port(0),
    scene_graph->get_source_pose_port(quadrotor_geometry->source_id_));
...
void QuadrotorGeometry::OutputGeometryPose(
    const systems::Context<double>& context,
    geometry::FramePoseVector<double>* poses) const {
  DRAKE_DEMAND(frame_id_.is_valid());
  const auto& state = get_input_port(0).Eval(context);
  math::RigidTransformd pose(
      math::RollPitchYawd(state.segment<3>(3)),
      state.head<3>());
  *poses = {{frame_id_, pose.GetAsIsometry3()}};
}
In my case, I have a floating-base multibody system (think a quadrotor with a pendulum attached) for which I've written a custom plant (a LeafSystem). The minimal coordinates for such a system would be 4 (quaternion) + 3 (x, y, z) + 1 (joint angle) = 8. If I were to follow the QuadrotorGeometry example, I believe I would need to specify the full RigidTransformd of the quadrotor and the full RigidTransformd of the pendulum.
Question
Is it possible to set up the visualization / specify the pose such that I only need to supply the 8 minimal position coordinates (quadrotor pose + joint angle) and have an internal MultibodyPlant compute each individual body's (quadrotor and pendulum) full RigidTransform, which can then be passed to the SceneGraph for visualization?
I believe this was possible with the "attic-ed" (which I take to mean "to be deprecated") RigidBodyTree, as was done in examples/compass_gait:
lcm::DrakeLcm lcm;
auto publisher = builder.AddSystem<systems::DrakeVisualizer>(*tree, &lcm);
publisher->set_name("publisher");
builder.Connect(compass_gait->get_floating_base_state_output_port(),
                publisher->get_input_port(0));
Here, get_floating_base_state_output_port() output the CompassGait state using only 7 values (3 rpy + 3 xyz + 1 hip angle).
What is the MultibodyPlant / SceneGraph equivalent of this?
Update (using MultibodyPositionToGeometryPose from Russ's deleted answer)
I created the following function which attempts to create a MultibodyPlant from the given model_file and connect the given plant's pose_output_port through MultibodyPositionToGeometryPose.
The pose_output_port I'm using is the 4 (quaternion) + 3 (xyz) + 1 (joint angle) minimal state.
void add_plant_visuals(
    systems::DiagramBuilder<double>* builder,
    geometry::SceneGraph<double>* scene_graph,
    const std::string model_file,
    const systems::OutputPort<double>& pose_output_port)
{
  multibody::MultibodyPlant<double> mbp;
  multibody::Parser parser(&mbp, scene_graph);
  auto model_id = parser.AddModelFromFile(model_file);
  mbp.Finalize();

  auto source_id = *mbp.get_source_id();
  auto multibody_position_to_geometry_pose = builder->AddSystem<
      systems::rendering::MultibodyPositionToGeometryPose<double>>(mbp);
  builder->Connect(pose_output_port,
                   multibody_position_to_geometry_pose->get_input_port());
  builder->Connect(
      multibody_position_to_geometry_pose->get_output_port(),
      scene_graph->get_source_pose_port(source_id));

  geometry::ConnectDrakeVisualizer(builder, *scene_graph);
}
The above fails with the following exception:
abort: Failure at multibody/plant/multibody_plant.cc:2015 in get_geometry_poses_output_port(): condition 'geometry_source_is_registered()' failed.

So, there's a lot in here. I have a suspicion there's a simple answer, but we may have to converge on it.
First, my assumptions:
You've got an "internal" MultibodyPlant (MBP). Presumably, you also have a context for it, allowing you to perform meaningful state-dependent calculations.
Furthermore, I presume the MBP was responsible for registering the geometry (probably happened when you parsed it).
Your LeafSystem will directly connect to the SceneGraph to provide poses.
Given your state, you routinely set the state in the MBP's context to do that evaluation.
Option 1 (Edited):
In your custom LeafSystem, declare the FramePoseVector output port and its calc callback. Inside that callback, set the state in the Context that your LeafSystem owns for its internal MBP, then simply Eval() the pose output port of that MBP, writing the result into the FramePoseVector that your callback was given.
Essentially (in a very coarse way):
MySystem::MySystem() {
  this->DeclareAbstractOutputPort("geometry_pose",
                                  &MySystem::OutputGeometryPose);
}

void MySystem::OutputGeometryPose(
    const Context<double>& context, FramePoseVector<double>* poses) const {
  // Push my state into the internally owned MBP context...
  mbp_context_.get_mutable_continuous_state()
      .SetFromVector(my_state_vector);
  // ...then evaluate the MBP's pose output port directly into *poses.
  *poses = mbp_.get_geometry_poses_output_port()
               .Eval<geometry::FramePoseVector<double>>(mbp_context_);
}
Option 2:
Rather than implementing a LeafSystem that owns an internal plant, you could build a Diagram that contains an MBP and exports the MBP's FramePoseVector output port as a diagram-level output port, which you then connect to the SceneGraph.
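Here is a coarse sketch of that wiring (my own illustration, not code from the question; MyVisualizationDiagram is a hypothetical name, and driving the plant's state from your 8 minimal coordinates is elided):

class MyVisualizationDiagram : public systems::Diagram<double> {
 public:
  MyVisualizationDiagram(const std::string& model_file,
                         geometry::SceneGraph<double>* scene_graph) {
    systems::DiagramBuilder<double> builder;
    auto* mbp = builder.AddSystem<multibody::MultibodyPlant<double>>();
    // Register the model's geometry with the (external) scene graph,
    // exactly as in the question's add_plant_visuals().
    multibody::Parser parser(mbp, scene_graph);
    parser.AddModelFromFile(model_file);
    mbp->Finalize();
    // Export the plant's FramePoseVector output so the outer diagram can
    // connect it directly to the SceneGraph's pose input port.
    builder.ExportOutput(mbp->get_geometry_poses_output_port());
    builder.BuildInto(this);
  }
};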

This answer addresses, specifically, your edit where you are attempting to use the MultibodyPositionToGeometryPose approach. It doesn't address the larger design issues.
Your problem is that the MultibodyPositionToGeometryPose system takes a reference to an MBP and keeps that reference. That means the MBP must stay alive and well for at least as long as the MPTGP is. However, in your code snippet, the MBP is local to the add_plant_visuals() function, so it is destroyed as soon as the function returns.
You'll need to create something that is persisted and owned by someone else.
(This is tightly related to my option 2 - now edited for improved clarity.)
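For example, a minimal sketch of a fix (one option among several, reusing only the calls from the question's own snippet): have the caller own the plant, so it outlives the diagram that references it.

std::unique_ptr<multibody::MultibodyPlant<double>> add_plant_visuals(
    systems::DiagramBuilder<double>* builder,
    geometry::SceneGraph<double>* scene_graph,
    const std::string& model_file,
    const systems::OutputPort<double>& pose_output_port)
{
  auto mbp = std::make_unique<multibody::MultibodyPlant<double>>();
  multibody::Parser parser(mbp.get(), scene_graph);
  parser.AddModelFromFile(model_file);
  mbp->Finalize();

  auto* to_pose = builder->AddSystem<
      systems::rendering::MultibodyPositionToGeometryPose<double>>(*mbp);
  builder->Connect(pose_output_port, to_pose->get_input_port());
  builder->Connect(to_pose->get_output_port(),
                   scene_graph->get_source_pose_port(*mbp->get_source_id()));
  geometry::ConnectDrakeVisualizer(builder, *scene_graph);

  // The caller must keep the returned plant alive for as long as the
  // diagram (and its MultibodyPositionToGeometryPose) is in use.
  return mbp;
}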

Related

Best way to ignore your own changes using an Aurelia Property Observer

I'm hoping someone can offer some guidance.
It is about making use of the Aurelia Observers in the best possible manner.
The example is a made-up example to illustrate the perceived problem.
Let's say I have three number properties: hours, mins, secs, aka H, M, S. I wish to build an object that monitors H, M, S and, each time any of the three changes, produces a property T: a string that looks like a duration, e.g. "hh:mm:ss", built from the H, M, S properties with appropriate zero padding, etc.
Conversely, the property T is bound to in the user interface and therefore if the user modifies T (eg. via an input field), I need to decompose the pieces of T into their H, M, S components and push the individual values back into the H, M, S properties.
It should not be lost upon anyone that this behaviour is remarkably similar to the Aurelia value converter behaviour in terms of "toView" and "fromView" behaviour, except that we are dealing with 3 properties on 1 side, and 1 property on the other side.
Implementation-wise, I can use the BindingEngine or ObserverLocator to create observers on the H, M, S and T properties, and subscribe to those change notifications. Easy peasy for that bit.
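For concreteness, that bit looks something like the sketch below (BindingEngine.propertyObserver is the documented API; the class and property names are just for illustration):

import { autoinject, BindingEngine, Disposable } from "aurelia-framework";

@autoinject()
export class TimeConverter {
  public h = 0;
  public m = 0;
  public s = 0;
  public t = "00:00:00";
  private subs: Disposable[] = [];

  constructor(private bindingEngine: BindingEngine) {
    // Subscribe to change notifications for each of the four properties.
    for (const prop of ["h", "m", "s", "t"]) {
      this.subs.push(
        this.bindingEngine.propertyObserver(this, prop)
          .subscribe((newValue, oldValue) => this.onChange(prop, newValue)));
    }
  }

  private onChange(prop: string, newValue: any): void {
    // Recompute T from H/M/S, or decompose T back into H/M/S, here.
  }

  public detached(): void {
    this.subs.forEach(sub => sub.dispose());
  }
}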
My question is ultimately this:
When H changes, that will trigger a recompute of T, therefore I will write a new value to T.
Changing T will trigger a change notification for T, whose handler will then decompose the value of T and write new values back to H, M, S.
Writing back to H, M, S triggers another change notification to produce a new T, and so forth.
It seems self-evident that it is - at the very least - DESIRABLE that when my "converter" object writes to the H, M, S, T properties then it should prepare itself in some way to ignore the change notification that it expects will be forthcoming as a consequence.
However, [the ease of] saying it and doing it are two different things.
So: is it really necessary at all? And if it is desirable, how do I go about doing it in the easiest manner possible? The impediments to doing it are as follows:
There must be a real change to a value to produce a change notification, so you need to know in advance whether you are "expecting" to receive one.
The much more difficult issue is that the change notifications are issued via the Aurelia "synthetic" micro-task queue, so you are very much unaware when you will receive that micro-task call.
So, is there a good solution to this problem? Does anyone actually worry about this, or do they simply rely on Point 1 producing a self-limiting outcome? That is, there might be a couple of cycles of change notifications, but the process will eventually quiesce.
The simplest way to implement an N-way binding (in this case 4-way) is by using change handlers combined with "ignore" flags to prevent infinite recursion.
The concept below is quite similar to how some of these problems are solved internally in the Aurelia framework - it is robust, performant, and about as "natural" as it gets.
import { autoinject, bindable, TaskQueue } from "aurelia-framework";

@autoinject()
export class MyViewModel {
  @bindable({ changeHandler: "hmsChanged" })
  public h: string;
  @bindable({ changeHandler: "hmsChanged" })
  public m: string;
  @bindable({ changeHandler: "hmsChanged" })
  public s: string;
  @bindable()
  public time: string;

  private ignoreHMSChanged: boolean;
  private ignoreTimeChanged: boolean;

  constructor(private tq: TaskQueue) { }

  public hmsChanged(): void {
    if (this.ignoreHMSChanged) return;
    // Recompute time, ignoring the timeChanged notification this causes.
    this.ignoreTimeChanged = true;
    this.time = `${this.h}:${this.m}:${this.s}`;
    // The notification arrives via the micro-task queue, so the flag can
    // only be reset from a micro task queued after it.
    this.tq.queueMicroTask(() => this.ignoreTimeChanged = false);
  }

  public timeChanged(): void {
    if (this.ignoreTimeChanged) return;
    // Decompose time back into h/m/s, ignoring the echoed notifications.
    this.ignoreHMSChanged = true;
    const hmsParts = this.time.split(":");
    this.h = hmsParts[0];
    this.m = hmsParts[1];
    this.s = hmsParts[2];
    this.tq.queueMicroTask(() => this.ignoreHMSChanged = false);
  }
}
If you need a generic solution you can reuse across multiple ViewModels with different kinds of N-way bindings, you could build it as a BindingBehavior (or possibly even a combination of two ValueConverters).
But the logic inside would be conceptually similar to my example above.
In fact, if the framework shipped a premade solution to this problem (say, a setting on the @bindable() decorator), its internal logic would also look similar. You will always need those flags, plus the micro-task delay before resetting them.
The only way to avoid the micro task would be to change some internals of the framework such that a context can be passed along with a change notification, telling the dispatcher "this change came from a change handler, so don't trigger that change handler again".

Implementing Jena Dataset provider over MongoDB

I have started to implement a set of classes that provide a direct interface to MongoDB for persistence, similar in spirit to the now-unmaintained SDB persistor implementation for RDBMS.
I am using the time-honored technique of creating the necessary concrete classes from the interfaces and doing a println in each method, thereby allowing me to trace the execution. I have gotten all the way to where the engine calls out to my cursor setup:
public ExtendedIterator<Triple> find(Node s, Node p, Node o) {
    System.out.println("+++ MongoGraph:extenditer:find(" + s + p + o + ")");
    // TBD Need to turn s,p,o into a match expression! Easy!
    MongoCursor cur = this.coll.find().iterator();
    ExtendedIterator<Triple> curs = new JenaMongoCursorIterator(cur);
    return curs;
}
Sadly, when I later call this:
while (rs.hasNext()) {
    QuerySolution soln = rs.nextSolution();
    System.out.println(soln);
}
It turns out rs.hasNext() is always false, even though material is present in the MongoCursor (I can debug-print it in the find() method). Also, the trace print in the next() method of my concrete iterator JenaMongoCursorIterator (which extends NiceIterator, which I believe is OK) is never hit. In short, the basic setup seems good, but the engine never cranks the iterator returned by find().
Trying to use SDB as a guide is completely overwhelming for someone not intimately familiar with the software architecture. It's fully factored and filled with interfaces and factories, and although that is excellent, it is difficult to navigate.
Has anyone tried to create their own persistor implementation and if so, what are the basic steps to getting a "hello world" running? Hello World in this case is ANY implementation, non-optimized, that can call next() on something to produce a Triple.
TL;DR: It is now working.
I was coding too eagerly, and JenaMongoCursorIterator contained a method hasNexl, which of course did not override hasNext (with a t) in the default implementation of NiceIterator, which returns false.
This is the sort of problem that Eclipse, with its visual debugging and tracing, makes much easier to resolve than regular jdb does. jdb is fine if you know the software architecture pretty well, but if you don't, having the multiple open source files and being able to mouse over variables provides a tremendous boost in the amount of context you can build up to home in on the problem.
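For anyone hitting the same wall, here is a minimal sketch of the corrected iterator (the mapping from a Mongo document to a Triple is schema-specific, so tripleFromDoc() and the s/p/o field names are hypothetical):

import org.apache.jena.graph.NodeFactory;
import org.apache.jena.graph.Triple;
import org.apache.jena.util.iterator.NiceIterator;
import org.bson.Document;
import com.mongodb.client.MongoCursor;

public class JenaMongoCursorIterator extends NiceIterator<Triple> {
    private final MongoCursor<Document> cur;

    public JenaMongoCursorIterator(MongoCursor<Document> cur) {
        this.cur = cur;
    }

    // Overriding hasNext (not "hasNexl") is what lets the engine crank
    // the iterator; NiceIterator's default implementation returns false.
    @Override
    public boolean hasNext() {
        return cur.hasNext();
    }

    @Override
    public Triple next() {
        return tripleFromDoc(cur.next());
    }

    // Hypothetical: build a Triple from the document's s/p/o fields.
    private Triple tripleFromDoc(Document doc) {
        return Triple.create(
            NodeFactory.createURI(doc.getString("s")),
            NodeFactory.createURI(doc.getString("p")),
            NodeFactory.createURI(doc.getString("o")));
    }
}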

State garbage collection in Beam with GlobalWindow

Apache Beam has recently introduced state cells, through StateSpec and the @StateId annotation, with partial support in Apache Flink and Google Cloud Dataflow.
I cannot find any documentation on what happens when this is used with a GlobalWindow. In particular, is there a way to have a "state garbage collection" mechanism to get rid of state for keys that have not been seen for a while, according to some configuration, while still maintaining a single all-time state for keys that are seen frequently enough?
Or, is the amount of state used in this case going to diverge, with no way to ever reclaim state corresponding to keys that have not been seen in a while?
I am also interested in whether a potential solution would be supported in either Apache Flink or Google Cloud Dataflow.
Flink and direct runners seem to have some code for "state GC" but I am not really sure what it does and whether it is relevant when using a global window.
State can be automatically garbage collected by a Beam runner at some point after a window expires - when the input watermark exceeds the end of the window by the allowed lateness, so all further input is droppable. The exact details depend on the runner.
As you correctly determined, the GlobalWindow may never expire, in which case this automatic collection of state will never be invoked. For bounded data, including drain scenarios, the global window actually does expire; but for a perpetual unbounded data source it will not.
If you are doing stateful processing on such data in the GlobalWindow, you can use user-defined timers (used through @TimerId, @OnTimer, and TimerSpec - I haven't blogged about these yet) to clear state after some timeout of your choosing. If the state represents an aggregation of some sort, then you'll want a timer anyhow to make sure your data is not stranded in state.
Here is a quick example of their use:
new DoFn<Foo, Baz>() {
  private static final String MY_TIMER = "my-timer";
  private static final String MY_STATE = "my-state";

  @StateId(MY_STATE)
  private final StateSpec<ValueState<Bizzle>> bizzleSpec =
      StateSpecs.value(Bizzle.coder());

  @TimerId(MY_TIMER)
  private final TimerSpec myTimer =
      TimerSpecs.timer(TimeDomain.EVENT_TIME);

  @ProcessElement
  public void process(
      ProcessContext c,
      @StateId(MY_STATE) ValueState<Bizzle> bizzleState,
      @TimerId(MY_TIMER) Timer myTimer) {
    bizzleState.write(...);
    myTimer.setForNowPlus(...);
  }

  @OnTimer(MY_TIMER)
  public void onMyTimer(
      OnTimerContext context,
      @StateId(MY_STATE) ValueState<Bizzle> bizzleState) {
    context.output(... bizzleState.read() ...);
    bizzleState.clear();
  }
}
There is no automatic garbage collection of state if you use GlobalWindows. Only if you use some non-global window will state be garbage collected, after the watermark passes the end of the window plus the allowed lateness.
What you can do if you must work with GlobalWindows is to manually keep the last update timestamp as state. You would then set a timer where you check this timestamp against the current time and delete state if necessary. You would set this timer when encountering a key for the first time (which you can tell from the absence of your timestamp state) and then re-set it in the @OnTimer method.
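A hedged sketch of that pattern, reusing the state/timer API from the answer above (Foo, Bar, and the one-hour timeout are my own placeholders; this is the variant that re-arms the timer on every element, so the timer firing at all means the key has been quiet for the whole timeout):

new DoFn<KV<String, Foo>, Bar>() {
  private static final String LAST_SEEN = "last-seen";
  private static final String GC_TIMER = "gc-timer";

  @StateId(LAST_SEEN)
  private final StateSpec<ValueState<Instant>> lastSeenSpec =
      StateSpecs.value(InstantCoder.of());

  @TimerId(GC_TIMER)
  private final TimerSpec gcTimer = TimerSpecs.timer(TimeDomain.EVENT_TIME);

  @ProcessElement
  public void process(
      ProcessContext c,
      @StateId(LAST_SEEN) ValueState<Instant> lastSeen,
      @TimerId(GC_TIMER) Timer gcTimer) {
    // Record when this key was last seen and push the GC deadline out.
    lastSeen.write(c.timestamp());
    gcTimer.setForNowPlus(Duration.standardHours(1));
    // ... the actual per-key processing goes here ...
  }

  @OnTimer(GC_TIMER)
  public void onGcTimer(
      OnTimerContext context,
      @StateId(LAST_SEEN) ValueState<Instant> lastSeen) {
    // No element re-armed the timer within the timeout, so reclaim the
    // per-key state (clear every state cell you declared).
    lastSeen.clear();
  }
}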

Is it possible to keep track of the state when processing logs by DataFlow?

Is it possible to have the Dataflow process maintain state? There are log-processing tools that allow for that by providing fast-access (proprietary / in-memory) files available to the real-time process so it can keep track of state on the logs while processing them.
A use-case example would be tracking registration steps taken by users. The registration steps would come in different logs, and the data from those logs would be assembled by the real-time process into one final database record (for each registered user) that is written to a database.
Can my Dataflow code keep track of the many registration steps (streaming input) by users and, once a user's registration steps are completed, have the Dataflow process write the records to the database (one record per user)?
I don't know much about the Dataflow architecture. It must be using some (proprietary / in-memory NoSQL) data storage for keeping track of the things it needs to track (e.g., when it tries to produce the top 100 customers). Is that fast-access data storage also available to Dataflow processes to use?
Thanks
As danielm said, state is not yet exposed. The good news is you may not need it for your use case.
If you have a PCollection<KV<UserId, LogEvent>> you can use a CombineFn and Combine.perKey to take all of the LogEvents for a specific UserId and combine them into a single output. The CombineFn tells Dataflow how to create an accumulator, update it by incorporating input elements, and then extract a final output. Transforms like Top actually use a CombineFn (with a Heap as the accumulator) rather than an actual state API.
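As a hedged sketch of what that looks like for the registration use case (LogEvent, UserRecord, and UserRecord.from are hypothetical types; a plain List serves as the accumulator):

public static class AssembleRegistration
    extends CombineFn<LogEvent, List<LogEvent>, UserRecord> {

  @Override
  public List<LogEvent> createAccumulator() {
    return new ArrayList<>();
  }

  @Override
  public List<LogEvent> addInput(List<LogEvent> acc, LogEvent input) {
    acc.add(input);  // collect this user's registration steps
    return acc;
  }

  @Override
  public List<LogEvent> mergeAccumulators(Iterable<List<LogEvent>> accs) {
    List<LogEvent> merged = new ArrayList<>();
    for (List<LogEvent> acc : accs) {
      merged.addAll(acc);
    }
    return merged;
  }

  @Override
  public UserRecord extractOutput(List<LogEvent> acc) {
    // Assemble all of the user's steps into the one final record.
    return UserRecord.from(acc);
  }
}

// Usage: one assembled record per user.
// PCollection<KV<UserId, UserRecord>> records =
//     events.apply(Combine.perKey(new AssembleRegistration()));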
If your events are of different types, you can still do something like this. For instance, if you have two logs, you can do:
PCollection<KV<UserId, LogEvent1>> events1 = ...;
PCollection<KV<UserId, LogEvent2>> events2 = ...;

// Create tuple tags for the value types in each collection.
final TupleTag<LogEvent1> tag1 = new TupleTag<LogEvent1>();
final TupleTag<LogEvent2> tag2 = new TupleTag<LogEvent2>();

// Merge collection values into a CoGbkResult collection.
PCollection<KV<UserId, CoGbkResult>> coGbkResultCollection =
    KeyedPCollectionTuple.of(tag1, events1)
        .and(tag2, events2)
        .apply(CoGroupByKey.<UserId>create());

// Access results and do something.
PCollection<T> finalResultCollection =
    coGbkResultCollection.apply(ParDo.of(
        new DoFn<KV<UserId, CoGbkResult>, T>() {
          @Override
          public void processElement(ProcessContext c) {
            KV<UserId, CoGbkResult> e = c.element();
            // Get all LogEvent1 values.
            Iterable<LogEvent1> event1s = e.getValue().getAll(tag1);
            // There will only be one LogEvent2.
            LogEvent2 event2 = e.getValue().getOnly(tag2);
            ... Do something to compute T ...
            c.output(...some T...);
          }
        }));
The above example was adapted from the docs on CoGroupByKey, which have more information.
Dataflow does not currently expose the underlying state mechanism that it uses. However, this is definitely on the radar for a future update.

ELKI DBSCAN R* tree index

In the MiniGUI, I can see db.index. How do I set it to tree.spatial.rstarvariants.rstar.RStarTreeFactory via Java code?
I have implemented:
params.addParameter(AbstractDatabase.Parameterizer.INDEX_ID, tree.spatial.rstarvariants.rstar.RStarTreeFactory);
For the second parameter of the addParameter() call, the tree.spatial...RStarTreeFactory class is not found.
// Setup parameters:
ListParameterization params = new ListParameterization();
params.addParameter(
    FileBasedDatabaseConnection.Parameterizer.INPUT_ID,
    fileLocation);
params.addParameter(AbstractDatabase.Parameterizer.INDEX_ID,
    RStarTreeFactory.class);
I am getting a NullPointerException. Did I use RStarTreeFactory.class correctly?
The ELKI command line (and the MiniGUI, which is a command-line builder) allows you to specify shorthand class names, leaving out the package prefix of the implemented interface.
The full command line documentation yields:
-db.index <object_1|class_1,...,object_n|class_n>
    Database indexes to add.
    Implementing de.lmu.ifi.dbs.elki.index.IndexFactory
    Known classes (default package de.lmu.ifi.dbs.elki.index.):
    -> tree.spatial.rstarvariants.rstar.RStarTreeFactory
    -> ...
I.e. for this parameter, the class prefix de.lmu.ifi.dbs.elki.index. may be omitted.
The full class name thus is:
de.lmu.ifi.dbs.elki.index.tree.spatial.rstarvariants.rstar.RStarTreeFactory
or you can just type RStarTreeFactory and let Eclipse auto-repair the import:
params.addParameter(AbstractDatabase.Parameterizer.INDEX_ID,
    RStarTreeFactory.class);
// Bulk loading static data yields much better trees and is much faster, too.
params.addParameter(RStarTreeFactory.Parameterizer.BULK_SPLIT_ID,
    SortTileRecursiveBulkSplit.class);
// Page size should fit your dimensionality.
// For 2-dimensional data, use page sizes less than 1000.
// Rule of thumb: 15...20 * (dim * 8 + 4) is usually reasonable
// (for in-memory bulk-loaded trees)
params.addParameter(AbstractPageFileFactory.Parameterizer.PAGE_SIZE_ID, 300);
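To complete the picture, a short sketch of turning those parameters into a usable database (this mirrors the pattern used in the ELKI tutorials; adjust class names to your ELKI version):

// Instantiate the database from the parameterization and load the data.
StaticArrayDatabase db = ClassGenericsUtil.parameterizeOrAbort(
    StaticArrayDatabase.class, params);
db.initialize();
// The R*-tree index is now available to algorithms (e.g. DBSCAN) that
// issue range queries against this database.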
See also: Geo Indexing example in the tutorial folder.
