Updating the Drake system states from the robot hardware pose during initialization - drake

I have been trying to set up a custom manipulation station with KUKA iiwa hardware in Drake, and I have the hardware interface working. When running a joint teleoperation script (adapted from drake/examples/manipulation_station/joint_teleop.py), the robot jerks violently at first (all joints try to move to the zero position) and then continues to operate normally. On digging deeper, I found that this is caused by the FirstOrderLowPassFilter system. While advancing the simulation a tiny bit (simulator.AdvanceTo(1e-6)) to evaluate the LCM messages, so that the initial GUI sliders, the filter's initial output value, and the plant joint positions match the hardware, the FirstOrderLowPassFilter momentarily outputs a value of 0. This sets the IIWA_COMMAND position to zero for an instant and causes the jerk.
How can I avoid this behavior?
As a workaround, I subscribe separately to the raw LCM message from the hardware before initializing the Drake systems, and set the filter's initial output value before advancing the simulation. Is this the recommended way?

I think what you're doing (manually reading the LCM message) is fine.
As an alternative, note how DiscreteDerivative offers a suppress_initial_transient = true option. Perhaps we could add a similar option (via an unrestricted update event) to FirstOrderLowPassFilter so that the initial output value is sampled from the input at t == 0. But the event sequencing at startup may still be difficult: we would essentially need to initialize the systems in their dataflow order, refreshing output ports as events fire, which is not natively supported.
As another alternative, perhaps we could configure the IIWA_COMMAND publisher to not publish at t == 0, instead publishing only for t >= 0.005.

FirstOrderLowPassFilter has a method, set_initial_output_value, to set the initial output value: https://drake.mit.edu/doxygen_cxx/classdrake_1_1systems_1_1_first_order_low_pass_filter.html#aaef7539cfbf1acfa0cf487c371bc5360
It is used in the example that you copied from:
https://github.com/RobotLocomotion/drake/blob/master/examples/manipulation_station/joint_teleop.py#L146
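For reference, a minimal sketch of that initialization pattern (the names station, filter, diagram, and simulator follow joint_teleop.py; adapt them to your own diagram):
# Read the measured joint positions from the hardware and seed the filter
# with them, so the first filtered command matches the hardware pose
# instead of zero.
station_context = diagram.GetMutableSubsystemContext(
    station, simulator.get_mutable_context())
q0 = station.GetOutputPort("iiwa_position_measured").Eval(station_context)
filter.set_initial_output_value(
    diagram.GetMutableSubsystemContext(
        filter, simulator.get_mutable_context()), q0)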

Related

Initialize `ManipulationStationHardwareInterface` with an LCM message

I am trying to use Drake to control a KUKA iiwa robot, using the ManipulationStationHardwareInterface for LCM communication. Before deployment on the real robot, I use mock_station_simulation to test. One thing I found is that after initializing the simulator (which I think should trigger the initialization events?), evaluating the HardwareInterface's output ports gives the default values instead of the values from the current LCM messages. For example:
drake::systems::DiagramBuilder<double> builder;
auto* interface = builder.AddSystem<ManipulationStationHardwareInterface>();
interface->Connect();
auto diagram = builder.Build();
drake::systems::Simulator<double> simulator(*diagram);
auto& simulator_context = simulator.get_mutable_context();
auto& interface_context =
    interface->GetMyMutableContextFromRoot(&simulator_context);
// Fix the command input so the diagram is fully connected.
interface->GetInputPort("iiwa_position").FixValue(&interface_context, current_position);
simulator.set_publish_every_time_step(false);
simulator.set_target_realtime_rate(1.0);
simulator.Initialize();
// Expected: the measured position from the incoming LCM status message.
auto q = interface->GetOutputPort("iiwa_position_measured").Eval(interface_context);
std::cout << "After initialization, the interface thinks the robot position is "
          << q.transpose() << std::endl;
q will be a zero vector.
This behavior bites me when I try to use the robot_state input port of DifferentialInverseKinematicsIntegrator: the integrator uses this default q to initialize its internal state rather than the robot's real position, and the robot moves violently. As a workaround, I have to read the robot's position first, use the SetPositions method of DifferentialInverseKinematicsIntegrator, and leave the robot_state input port unconnected. Another issue is that LogVectorOutput always has the default value as its first entry, which is of little use.
I think this problem is related to the LcmSubscriberSystem. My question is: is it possible to use the LCM message to initialize the system rather than the default value?
Thank you.
This is an interesting (and very reasonable) question. I could imagine having an initialization event for the LcmSubscriber that blocks until the first message arrives. But currently I don't believe we guarantee the order of the initialization events in a Diagram (the order is likely determined by something like the order in which the systems were added to the Diagram, and we don't have a nice mechanism for setting it). It's possible that the diff IK block could initialize before the LcmSubscriber.
In this case, I think it might be better to capture the first LcmSubscriber message yourself outside the simulation loop, and manually set the diff IK integrator initial state. Then start the simulation.
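A rough sketch of that workaround (assuming the driver publishes lcmt_iiwa_status on the standard IIWA_STATUS channel, and with diff_ik standing in for your DifferentialInverseKinematicsIntegrator):
import numpy as np
from drake import lcmt_iiwa_status
from pydrake.lcm import DrakeLcm

lcm = DrakeLcm()
status = {}
lcm.Subscribe("IIWA_STATUS",
              lambda raw: status.update(msg=lcmt_iiwa_status.decode(raw)))
while "msg" not in status:  # Block until the first status message arrives.
    lcm.HandleSubscriptions(timeout_millis=100)
q0 = np.array(status["msg"].joint_position_measured)

# Seed the integrator so diff IK starts from the real robot pose.
diff_ik.SetPositions(
    diff_ik.GetMyMutableContextFromRoot(simulator.get_mutable_context()), q0)
Then advance the simulator as usual.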
I'll see if I can get some of the other Drake developers to weigh in.

OpenCV OpenVINO cv2.rectangle

I am using OpenCV and OpenVINO. When a face is detected, I use cv2.rectangle and send the coordinates to move the motors, but I want this only for the first person bounded by a box: when multiple people are in view, multiple sets of coordinates get sent, causing the servo and stepper motors to go crazy. Any help would be appreciated. Thank you.
Generally, code runs line by line. You'll need to create a proper function for each scenario so that the data can be handled and processed properly. In short, you'll need to implement error handling and data handling (probably more than these, depending on your software/hardware design). If you are trying to run multiple threads of execution at the same time, it is better to use multithreading.
Besides, you are using two types of motors. Simply taking in all the data is inefficient and prone to losing data. You'll need to be clear about what the servo motor's and stepper motor's tasks are, the relations between the coordinates, who triggers what, and what task X to run if something fails or a step in the sequence is missing.
For example, the sequence for Data A should produce Result A, but it halts halfway because Data B entered the buffer, interfering with Result A and at the same time corrupting the Result B that was expected to follow. (This is what is happening in your program.)
It's good to review and design your whole process by creating a coding flowchart (a diagram that represents an algorithm). It will give you a clear idea of what should happen for each sequence of code. Then, design a proper handler for each situation.
Can you share more insights of your (pseudo-)code, please?
It sounds easy: you trigger a face-detection inference request and get back a list/vector with all detected faces (a region of interest for each detected face), including false positives, which require some consistency checks to filter out.
If you are interested in the first detected face only, then you could just process the first returned result from the list/vector.
However, you will see that the order of results can change: when two faces A and B were detected, the next run could still return both faces, but B first and then A.
You could add object-tracking on top of face-detection to make sure you always process the same face.
(But even that could fail sometimes)
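To make the "primary face only" idea concrete, here is a rough Python sketch. It assumes the common OpenVINO SSD-style detection output of shape [1, 1, N, 7], where each row is [image_id, label, conf, x_min, y_min, x_max, y_max] in normalized coordinates; check your model's documentation.
import cv2
import numpy as np

CONF_THRESH = 0.6  # assumed confidence cutoff; tune for your model

def pick_primary_face(detections, frame):
    h, w = frame.shape[:2]
    boxes = []
    for det in detections[0][0]:
        if float(det[2]) < CONF_THRESH:
            continue  # skip low-confidence (likely false-positive) hits
        x1, y1, x2, y2 = (det[3:7] * np.array([w, h, w, h])).astype(int)
        boxes.append((x1, y1, x2, y2))
    if not boxes:
        return None
    # Keep only the largest box so multiple faces don't fight over the motors.
    x1, y1, x2, y2 = max(boxes, key=lambda b: (b[2] - b[0]) * (b[3] - b[1]))
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    return (x1 + x2) // 2, (y1 + y2) // 2  # box center to drive the motors
Choosing the largest box rather than index 0 sidesteps the ordering problem mentioned above, though object tracking remains the more robust fix.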

Computing the generalized forces arising due to ExternallyAppliedSpatialForces in Drake, e.g. due to Propeller

I am working with a Diagram that includes a MultibodyPlant with a Propeller connected to it. The Propeller actually realizes numerous physical propellers, which are distributed among the bodies of the MultibodyPlant.
I am able to simulate the dynamics of the combined system by setting the prop forces with FixValue, so I'm on the right track.
What I'd like to be able to do is, given a configuration for the system (i.e., the MultibodyPlant context) and a chosen propeller command, compute the generalized forces acting on the system. My sense is that this is not immediately available, since the simulation actually uses the recursive Newton-Euler algorithm (RNEA) and so never aggregates the forces in that form. For what I'm doing (and even just as a sanity check), I would like to compute the forces directly, not just their effect on the evolution of the state.
Is there an existing method built into Drake to compute this, or should I compute it manually using the spatial Jacobian of each propeller frame and the applied SpatialForce of the corresponding propeller? (Something along the lines of this question: How to get the matrix that maps external forces to generalized forces?)
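For concreteness, the manual computation I have in mind is roughly the following (pydrake sketch; the body name "prop_body" and the world-frame spatial force F_W are placeholders, and the context is assumed to already hold the configuration of interest):
import numpy as np
from pydrake.multibody.tree import JacobianWrtVariable

frame_P = plant.GetBodyByName("prop_body").body_frame()  # hypothetical body name
J_WP = plant.CalcJacobianSpatialVelocity(
    context, JacobianWrtVariable.kV, frame_P, np.zeros(3),
    plant.world_frame(), plant.world_frame())
# Power pairing gives tau = J^T F, with F = [torque; force] in the world frame.
tau = J_WP.T @ F_W.get_coeffs()
Summing tau over all propeller frames would then give the total generalized forces.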
Many thanks for your help.
I understand better now. It's a very reasonable request! You have two systems at play: the Propeller and MultibodyPlant. Unfortunately, the quantity you want is all of Propeller and just a piece of MultibodyPlant. We don't offer direct access to the B(q) matrix in that case.
What you can do is, with scalar type AutoDiffXd or symbolic::Expression, call CalcImplicitTimeDerivativesResidual on the Diagram to get the entire dynamics in implicit form (avoiding taking the inverse of M(q)). You could call it twice -- once with the Propeller inputs set up via FixValue as AutoDiffXd and/or symbolic::Variable, and again with them set to zero -- then subtract the two to isolate the propeller contribution.
Note: CalcImplicitTimeDerivativesResidual is relatively new; I haven't pushed the python bindings for it yet (but it's been on my list). Do you need it from python?
I think that perhaps you are looking for the MultibodyPlant reaction_forces output port?

Trajectory Optimization with Direct Transcription using Generalized Forces instead of Actuator Torques

I am trying to perform trajectory optimization for a robot arm attached to a free-floating base (to simulate a spacecraft-robot arm scenario).
For the base-torque actuation, I had previously tried attaching pseudo-links with actuators to the base CoM, but these approaches did not work (see this). In the end, I decided to use generalized forces to control the system instead of actuators, as they seem to better represent my use case of space robots (see here). For linearization, I manually change the B (actuation) matrix to include the base torque actuators (reaction wheels).
With this background, I am trying to perform trajectory optimization using direct transcription, where I aim to find u(t), which in my case is the generalized forces rather than the actuator forces. I add constraints that force certain generalized forces to zero (the translational forces on the base CoM) to better represent the real system (reaction wheels).
Now, when calling the DirectTranscription function, for the input_port_index parameter I provide input_port_index=plant.get_applied_generalized_force_input_port().get_index() so that the trajectory is found over generalized forces instead of actuation forces.
However, when running Solve(), I get the following error:
RuntimeError: Actuation input port for model instance planarSC must be connected.
The error makes sense, as I indeed do not connect the actuation input port. But the error persists even after I use plant.get_actuation_input_port().FixValue(plant.CreateDefaultContext(), np.zeros((plant.num_actuators(), 1))) to set the actuation input port to zero.
Is there a way to perform DirectTranscription on generalized forces as u(t) instead of the actuation forces?
Update: I've corrected the context being provided to DirectTranscription. The Python script now has the following code but still gives the same error about the actuation port not being connected:
# Fix the actuation input to zero in the same context that is passed to
# the trajectory optimizer.
trajOptContext = plant.CreateDefaultContext()
jointActuation = np.zeros((plant.num_actuators(), 1))
plant.get_actuation_input_port().FixValue(trajOptContext, jointActuation)
prog = DirectTranscription(
    plant, trajOptContext, N,
    input_port_index=plant.get_applied_generalized_force_input_port().get_index())

Controlling the phase of a signal in Pure Data

I need to figure out a way of changing the phase of a signal. The objective is to generate two signals, one of them phase-shifted, and observe the patterns when they are combined.
Below is the program I'm using so far:
As in the above setting, I need to use the same signal to generate a phase-shifted signal, then combine the two signals and observe the patterns.
Can someone help me out on this?
Thanks.
Using the right inlet of the [osc~] object is a valid way to set the phase of an oscillator, but it is neither the only way nor even the most correct one. The right inlet only accepts a float at the control level.
A more comprehensive manipulation of phase can be done at the signal level using the [phasor~], [cos~], [wrap~], and [+~] objects. Essentially, you perform the same function as [osc~] with a table-lookup technique using [phasor~] and [cos~]. You could also read another table with [tabread4~] instead of [cos~].
This technique keeps your oscillators in sync. You can manipulate the phase of your oscillators with other oscillators, table lookups, and still of course floats (so long as the phase value is between 0 and 1, hence the [wrap~] object).
[image: phase modulation at the signal level]
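If it helps to see the arithmetic that object chain performs, here is a small numpy sketch (Python, not Pd) of [phasor~] feeding [+~], [wrap~], and [cos~]; the frequency and offset values are arbitrary:
import numpy as np

sr, freq = 44100, 440.0            # sample rate and oscillator frequency
n = np.arange(4410)                # 0.1 s of samples
phasor = (freq * n / sr) % 1.0     # [phasor~ 440]: ramp from 0 to 1
offset = 0.25                      # [+~ 0.25]: 90-degree shift as a 0..1 fraction
shifted = (phasor + offset) % 1.0  # [wrap~]: keep the phase in 0..1
y = np.cos(2 * np.pi * shifted)    # [cos~]: one cosine cycle per ramp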
Afterwards, like the other examples here, you can add the signals together and write them to corresponding tables or output the signal chain or both.
Here's how you might do the same for a custom table lookup. Of course, you'd replace sometable with your custom table name and num-samp-in-some-table with the number of samples in your table.
[image: signal-level phase modulation with custom tables]
Hope it helps!
To change the phase of an oscillator, use the right-hand side inlet.
Quoting Johannes Kreidler's Programming Electronic Music in Pd:
3.1.2.1.3 Phase
In Pd, you can also set the membrane position at which a sound wave should begin (or to which it should jump). This is called the phase of a wave. You can set the phase in Pd in the right inlet of the "osc~" object, with numbers between 0 and 1:
A wave's entire period is encompassed by the range from 0 to 1. However, it is often spoken of in terms of degrees, where the entire period has 360 degrees. One speaks, for example, of a "90 degree phase shift". In Pd, the input for the phase would be 0.25.
So for instance, if you want to observe how two signals can become mute due to destructive interference, you can try something like this:
Note that I connected a bang to adjust the phases of both signals simultaneously. This is important: while you can reset the phase of one signal to any value between 0.0 and 1.0 at any moment, the other oscillator won't be reset, so the results would be fairly random (you never know what phase value the other signal will be at!). Resetting both does the trick.
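As a quick numeric sanity check of that destructive-interference behavior, in numpy terms (Python, not Pd; the frequency is arbitrary):
import numpy as np

sr, freq = 44100, 440.0
t = np.arange(4410) / sr
a = np.cos(2 * np.pi * freq * t)           # phase 0
b = np.cos(2 * np.pi * (freq * t + 0.5))   # phase 0.5, i.e., 180 degrees
print(np.max(np.abs(a + b)))               # ~0: the two signals cancel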
