I'm attempting to add a disturbance to my iiwa juggling robot's paddle
disturbance = self.builder.AddSystem(RandomSource(RandomDistribution.kGaussian, 7, 0.1))
# print("This is the disturbance", disturbance)
disturbance.set_name("disturbance")
adder = self.builder.AddSystem(Adder(2, 7))  # adds two inputs of size 7 each
# add the disturbance to the ball velocity with the adder
self.builder.Connect(disturbance.get_output_port(), adder.get_input_port(0))
self.builder.Connect(integrator.get_output_port(), adder.get_input_port(1)) #need to get the ball velocity output port?
self.builder.Connect(adder.get_output_port(), self.plant.get_actuation_input_port()) # need to get the input ball velocity to the plant?
I keep getting the following error:
DiagramBuilder: Cannot operate on ports of System plant until it has been registered using AddSystem
I'm not sure what this is referring to...
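For reference, the error indicates that self.plant was never registered with the same DiagramBuilder that performs the Connect calls; a builder can only wire ports of systems it owns. A minimal sketch of the usual pattern (hedged, with hypothetical names):
from pydrake.multibody.plant import AddMultibodyPlantSceneGraph
# Create (and thereby register) the plant with the same builder that
# will later call Connect on its ports:
self.plant, scene_graph = AddMultibodyPlantSceneGraph(self.builder, time_step=1e-3)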
Related video: https://www.youtube.com/watch?v=aVwxzDHniEw
I want to replicate the intro (Timestamp: 0:00 to 1:05) of the linked video in Manim. It doesn’t need to be exact; I just want the general flow of a curve gliding across the screen. How would I achieve this?
Here, I've left comments on what I'm doing.
from manim import *

class MovingScene(MovingCameraScene):
    def construct(self):
        # create a value tracker to move a dot at the end of the graph
        k = ValueTracker(0)
        # make a graph
        axes = Axes(x_range=[0, 10, 1])
        # plot a sine wave on the graph
        graph = axes.plot(lambda x: np.sin(x))
        # make a dot that will move along the graph
        dot = always_redraw(
            lambda: Dot().move_to(
                axes.c2p(k.get_value(), np.sin(k.get_value()))
            )
        )
        # add said dot to the scene, but not the graph, so we don't see the axes
        self.add(dot)
        # add everything at once so that all of the actions happen at the same time
        self.play(
            MoveAlongPath(self.camera.frame, graph),
            Create(graph),
            k.animate.set_value(10),
            run_time=10,
        )
        self.wait()
Is there any way to find the Jacobian of a frame with respect to the joints of a given model (as opposed to the whole plant), or alternatively to determine which columns of the full plant Jacobian correspond to a given model’s joints? I’ve found MultibodyPlant.CalcJacobian*, but I’m not sure if those are the right methods.
I also tried mapping the JointIndex of each joint in the model to a column of MultibodyPlant.CalcJacobian*, but the results didn't make sense -- the joint indices are sequential (all of one model followed by all of the other), but the Jacobian columns look interleaved (a column corresponding to one model followed by one corresponding to the other).
Assuming you are computing with respect to velocities, you'll want to use Joint.velocity_start() and Joint.num_velocities() to create a mask or set of indices. If you are in Python, then you can use NumPy's array slicing to select the desired columns of your Jacobian.
(If you compute w.r.t. position, then make sure you use Joint.position_start() and Joint.num_positions().)
Example notebook:
https://nbviewer.jupyter.org/github/EricCousineau-TRI/repro/blob/eb7f11d/drake_stuff/notebooks/multibody_plant_jacobian_subset.ipynb
(TODO: Point to a more official source.)
Main code to pay attention to:
import numpy as np

def get_velocity_mask(plant, joints):
    """
    Generates a mask according to the supplied set of ``joints``.
    The binary mask cannot preserve ordering of joint indices, thus
    ``joints`` is required to be a ``set`` (for simplicity).
    """
    assert isinstance(joints, set)
    mask = np.zeros(plant.num_velocities(), dtype=bool)
    for joint in joints:
        start = joint.velocity_start()
        end = start + joint.num_velocities()
        mask[start:end] = True
    return mask

def get_velocity_indices(plant, joints):
    """
    Generates a list of indices according to the supplied list of ``joints``.
    The indices are generated according to the order of ``joints``, thus
    ``joints`` is required to be a ``list`` (for simplicity).
    """
    indices = []
    for joint in joints:
        start = joint.velocity_start()
        end = start + joint.num_velocities()
        indices.extend(range(start, end))
    return indices
...
# print(Jv1_WG1) # Prints 7 dof from a 14 dof plant
[[0.000 -0.707 0.354 0.707 0.612 -0.750 0.256]
[0.000 0.707 0.354 -0.707 0.612 0.250 0.963]
[1.000 -0.000 0.866 -0.000 0.500 0.612 -0.079]
[-0.471 0.394 -0.211 -0.137 -0.043 -0.049 0.000]
[0.414 0.394 0.162 -0.137 0.014 0.008 0.000]
[0.000 -0.626 0.020 0.416 0.035 -0.064 0.000]]
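As a usage sketch (hedged; Jv_full and iiwa_joints are hypothetical names for the full-plant Jacobian and a list of one model's joints):
# Select only the Jacobian columns that belong to this model's joints:
indices = get_velocity_indices(plant, iiwa_joints)
Jv_model = Jv_full[:, indices]
# Or, with the unordered boolean-mask variant:
mask = get_velocity_mask(plant, set(iiwa_joints))
Jv_model = Jv_full[:, mask]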
As I understand it, without any external forces applied (such as gravity), the center of mass of a multibody system should remain constant if only internal torques (at the joints) are applied.
In PyDrake, I've set up a free-floating simulation of a box spacecraft with a 6 DoF arm attached. At the start, I set an initial velocity to the first joint using the following command:
plant.GetJointByName("Joint_1").set_angular_rate(plant_context, -2.5)
As expected, the whole multibody system is tumbling based on the inertia properties. However, while looking at the meshcat simulation, the spacecraft systems seemed to move away from the initial position. I checked the System Center of mass location using the following command:
print(plant.CalcCenterOfMassPosition(plant_context))
And indeed, the command returns different position vectors at the beginning and the end of the simulation. As I understand it, this should not happen: no forces are applied to the system, and the initial angular rate should only introduce angular momentum, not linear momentum, so the CoM should not translate. The simulation, however, behaves differently.
I wonder if this is an artifact of the integrator being used, or some other phenomenon in Drake that I am missing or setting up incorrectly.
The Pastebin of the urdf file can be found here: https://pastebin.com/rUVRkBBf
The pydrake script which I'm running is here:
builder = DiagramBuilder()
# Adds both MultibodyPlant and the SceneGraph, and wires them together.
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=0)
# Note that we parse into both the plant and the scene_graph here.
Parser(plant, scene_graph).AddModelFromFile(flairSCpath)
# Don't Weld Frame for Free-Floating Bodies.
# plant.WeldFrames(plant.world_frame(), plant.GetFrameByName("Base_SC"))
# Reduce/remove gravity
plantGravityField = plant.gravity_field()
plantGravityField.set_gravity_vector([0,0,0])
plant.Finalize()
# Adds the MeshcatVisualizer and wires it to the SceneGraph.
meshcat = ConnectMeshcatVisualizer(builder, scene_graph, zmq_url=zmq_url, delete_prefix_on_load=True)
diagram = builder.Build()
context = diagram.CreateDefaultContext()
plant_context = plant.GetMyMutableContextFromRoot(context)
# plant.SetPositions(plant_context, [0, 0, 0, 0, 0, 0])
plant.get_actuation_input_port().FixValue(plant_context, np.zeros(6))
# plant.GetJointByName("Joint_1").set_angular_rate(context, -2.5)
plant.GetJointByName("Joint_1").set_angular_rate(plant_context, -2.5)
# Print CoM Value at the Start
print("CoM Location Before Simulation:")
print(plant.CalcCenterOfMassPosition(plant_context))
simulator = Simulator(diagram, context)
# simulator.set_target_realtime_rate(1.0)
meshcat.load()
# MakeJointSlidersThatPublishOnCallback(plant, meshcat, context);
# meshcat.start_recording()
simulator.AdvanceTo(60.0 if running_as_notebook else 0.1)
# meshcat.stop_recording()
# meshcat.publish_recording()
print("CoM Location After Simulation")
print(plant.CalcCenterOfMassPosition(plant_context))
The movement of the CoM is present even when different time steps are set for the integrator. My worry is that the system CoM shouldn't translate without any applied forces, yet it does, which suggests I'm setting up the simulation incorrectly.
Here is another way to check that there are no external forces acting on your system.
There is a C++ method (similarly in Python) whose signature is something like:
SpatialMomentum<T> MultibodyPlant::CalcSpatialMomentumInWorldAboutPoint(
    const systems::Context<T>& context,
    const Vector3<T>& p_WoP_W)
If the net external force on your system is zero, the .translational() component of the returned spatial momentum should remain constant. This is true regardless of what you pass in for p_WoP_W.
If there are no external forces on your system (hence the net external moment on your system is zero), the .rotational() component of the returned spatial momentum should remain constant as long as p_WoP_W is p_WoScm_W (the position vector from the world origin Wo to the system center of mass Scm, expressed in the world frame W) or the zero vector.
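A minimal pydrake sketch of that check (hedged; plant and plant_context as in the scripts below):
import numpy as np
# Spatial momentum of the whole system about the world origin (p_WoP_W = 0):
L_WS = plant.CalcSpatialMomentumInWorldAboutPoint(plant_context, np.zeros(3))
print(L_WS.translational())  # should stay constant if the net external force is zero
print(L_WS.rotational())     # per the conditions above, also constant here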
Here is my quick reproduction, in case it helps. I've confirmed that setting a very small target accuracy gives the same numbers, turned the initial angular velocity down so it looks more reasonable, confirmed that gravity is properly being zeroed, etc.
import numpy as np
import pydrake.all
import pydot
builder = pydrake.systems.framework.DiagramBuilder()
# Adds both MultibodyPlant and the SceneGraph, and wires them together.
plant, scene_graph = pydrake.multibody.plant.AddMultibodyPlantSceneGraph(builder, time_step=0)
# Note that we parse into both the plant and the scene_graph here.
flair = pydrake.multibody.parsing.Parser(plant, scene_graph).AddModelFromFile("/home/russt/Downloads/flair.urdf")
# Don't Weld Frame for Free-Floating Bodies.
# plant.WeldFrames(plant.world_frame(), plant.GetFrameByName("Base_SC"))
# Reduce/remove gravity
plantGravityField = plant.gravity_field()
plantGravityField.set_gravity_vector([0,0,0])
plant.Finalize()
# Adds the MeshcatVisualizer and wires it to the SceneGraph.
meshcat = pydrake.systems.meshcat_visualizer.ConnectMeshcatVisualizer(builder, scene_graph, zmq_url="tcp://127.0.0.1:6000", delete_prefix_on_load=True)
diagram = builder.Build()
context = diagram.CreateDefaultContext()
plant_context = plant.GetMyMutableContextFromRoot(context)
# plant.SetPositions(plant_context, [0, 0, 0, 0, 0, 0])
plant.get_actuation_input_port().FixValue(plant_context, np.zeros(6))
plant.GetJointByName("Joint_1").set_angular_rate(plant_context, -.25)
plant.get_actuation_input_port().FixValue(plant_context, np.zeros(6))
pydot.graph_from_dot_data(plant.GetTopologyGraphvizString())[0].write_svg("flair_topology.svg")
simulator = pydrake.systems.analysis.Simulator(diagram, context)
simulator.get_mutable_integrator().set_target_accuracy(1e-10)
# Print CoM Value at the Start
print("CoM Location Before Simulation:")
print(plant.CalcCenterOfMassPosition(plant_context, [flair]))
simulator.AdvanceTo(60.0)
print("CoM Location After Simulation")
print(plant.CalcCenterOfMassPosition(plant_context, [flair]))
So, finally, the answer is indeed in the comments to the question. The set_angular_rate() function does set the initial angular rate between two bodies, but in doing so it also imparts an initial velocity to the center of mass. A better way to move the joint when quickly testing a free-floating system is to use the get_actuation_input_port().FixValue() function instead. A quick example (which produces no CoM movement) is given below:
builder = DiagramBuilder()
# Adds both MultibodyPlant and the SceneGraph, and wires them together.
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=0)
# Note that we parse into both the plant and the scene_graph here.
Parser(plant, scene_graph).AddModelFromFile(flairSCpath)
# Don't Weld Frame for Free-Floating Bodies.
# plant.WeldFrames(plant.world_frame(), plant.GetFrameByName("Base_SC"))
# Reduce/remove gravity
plantGravityField = plant.gravity_field()
plantGravityField.set_gravity_vector([0,0,0])
plant.Finalize()
# Adds the MeshcatVisualizer and wires it to the SceneGraph.
meshcat = ConnectMeshcatVisualizer(builder, scene_graph, zmq_url=zmq_url, delete_prefix_on_load=True)
diagram = builder.Build()
context = diagram.CreateDefaultContext()
plant_context = plant.GetMyMutableContextFromRoot(context)
# plant.SetPositions(plant_context, [0, 0, 0, 0, 0, 0])
jointActuation = np.zeros((6,1))
jointActuation[0] = 1
plant.get_actuation_input_port().FixValue(plant_context, jointActuation)
# Print CoM Value at the Start
print("CoM Location Before Simulation:")
print(plant.CalcCenterOfMassPosition(plant_context))
simulator = Simulator(diagram, context)
# simulator.set_target_realtime_rate(1.0)
meshcat.load()
# MakeJointSlidersThatPublishOnCallback(plant, meshcat, context);
# meshcat.start_recording()
simulator.AdvanceTo(60.0 if running_as_notebook else 0.1)
# meshcat.stop_recording()
# meshcat.publish_recording()
print("CoM Location After Simulation")
print(plant.CalcCenterOfMassPosition(plant_context))
Thanks to Russ Tedrake and Sherm for helping me (and, hopefully, future users) understand this.
I'm creating a university project with OpenCV Python and ArUco markers, where I would like to get a (relatively) robust pose estimation for the movement of the camera. I plan on using this for indoor drone flight graphing. For this, I have to transform the camera pose to world coordinates defined by the first seen marker.
I know there must be a transformation matrix between the markers, but I can't seem to figure out what it is. I am trying with the difference of the respective rvecs.
The code for the function in Python:
import cv2
import numpy as np

def TransformBetweenMarkers(tvec_m, tvec_n, rvec_m, rvec_n):
    tvec_m = np.transpose(tvec_m)  # tvec of 'm' marker
    tvec_n = np.transpose(tvec_n)  # tvec of 'n' marker
    dtvec = tvec_m - tvec_n  # vector from 'm' to 'n' marker in the camera's coordinate system
    # get the markers' rotation matrices respectively
    R_m = cv2.Rodrigues(rvec_m)[0]
    R_n = cv2.Rodrigues(rvec_n)[0]
    tvec_mm = np.matmul(-R_m.T, tvec_m)  # camera pose in 'm' marker's coordinate system
    tvec_nn = np.matmul(-R_n.T, tvec_n)  # camera pose in 'n' marker's coordinate system
    # translational difference between markers in 'm' marker's system,
    # basically the origin of 'n'
    dtvec_m = np.matmul(-R_m.T, dtvec)
    # this gets me the same as tvec_mm,
    # but this only works if 'm' marker is seen
    # tvec_nm = dtvec_m + np.matmul(-R_m.T, tvec_n)
    # something with the rvec difference must give the transformation(???)
    drvec = rvec_m - rvec_n
    drvec_m = np.transpose(np.matmul(R_m.T, np.transpose(drvec)))  # transformed to 'm' marker
    dR_m = cv2.Rodrigues(drvec_m)[0]
    # I want to transform tvec_nn with a single matrix,
    # so it would be interpreted in 'm' marker's system
    tvec_nm = dtvec_m + np.matmul(dR_m.T, tvec_nn)
    # objective: tvec_mm == tvec_nm
This is the best I could get, but there is still an error of ±0.03 meters between the tvec_mm and tvec_nm translation values.
Is it possible to get any better with this? Is this even a legit transformation or just a huge coincidence, that it gives approximately the same values? Any ideas?
Thank you!
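For what it's worth, here is a hedged sketch of an alternative to differencing rvecs: compose the two poses as 4x4 homogeneous transforms, assuming the usual OpenCV convention that (rvec, tvec) maps marker coordinates into camera coordinates:
import cv2
import numpy as np

def pose_matrix(rvec, tvec):
    # 4x4 transform taking marker coordinates to camera coordinates
    T = np.eye(4)
    T[:3, :3] = cv2.Rodrigues(rvec)[0]
    T[:3, 3] = np.asarray(tvec).ravel()
    return T

T_cm = pose_matrix(rvec_m, tvec_m)  # camera <- marker 'm'
T_cn = pose_matrix(rvec_n, tvec_n)  # camera <- marker 'n'
T_mn = np.linalg.inv(T_cm) @ T_cn   # marker 'm' <- marker 'n'
# Camera position in 'n' marker's frame, mapped into 'm' marker's frame;
# this should match tvec_mm:
cam_in_n = np.linalg.inv(T_cn)[:3, 3]
cam_in_m = (T_mn @ np.append(cam_in_n, 1.0))[:3]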
I am trying to find a simple algorithm to find the correspondence between two sets of 2D points (registration). One set contains the template of an object I'd like to find and the second set mostly contains points that belong to the object of interest, but it can be noisy (missing points as well as additional points that do not belong to the object). Both sets contain roughly 40 points in 2D. The second set is a homography of the first set (translation, rotation and perspective transform).
I am interested in finding an algorithm for registration in order to get the point-correspondence. I will be using this information to find the transform between the two sets (all of this in OpenCV).
Can anyone suggest an algorithm, library, or small bit of code that could do the job? As I'm dealing with small sets, it does not have to be super optimized. Currently, my approach is a RANSAC-like algorithm (a rough sketch follows the steps below):
1. Choose 4 random points from set 1 and 4 from set 2.
2. Compute the transform matrix H (using OpenCV's getPerspectiveTransform()).
3. Warp the 1st set of points using H and test how well they align to the 2nd set of points.
4. Repeat 1-3 N times and choose the best transform according to some metric (e.g. sum of squares).
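For reference, a rough, hedged sketch of those four steps in Python (nothing here is from the original post; pts1 and pts2 are assumed to be the two (N, 2) point sets):
import cv2
import numpy as np

def ransac_like_registration(pts1, pts2, n_iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    best_H, best_score = None, np.inf
    for _ in range(n_iters):
        # 1. Choose 4 random points from each set (a guessed correspondence).
        src = pts1[rng.choice(len(pts1), 4, replace=False)].astype(np.float32)
        dst = pts2[rng.choice(len(pts2), 4, replace=False)].astype(np.float32)
        # 2. Compute the transform matrix H from the guess.
        H = cv2.getPerspectiveTransform(src, dst)
        # 3. Warp the 1st set and score it against nearest points of the 2nd set.
        warped = cv2.perspectiveTransform(
            pts1.reshape(-1, 1, 2).astype(np.float32), H).reshape(-1, 2)
        nearest = np.min(
            np.linalg.norm(warped[:, None, :] - pts2[None, :, :], axis=2), axis=1)
        # 4. Keep the best transform by the sum-of-squares metric.
        score = np.sum(nearest ** 2)
        if score < best_score:
            best_H, best_score = H, score
    return best_H, best_score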
Any ideas? Thanks for your input.
With Python you can use the Open3D library, which is very easy to install in Anaconda. For your purpose ICP should work fine, so we'll use classical ICP, which minimizes point-to-point distances between closest points in every iteration. Here is the code to register 2 clouds:
import numpy as np
import open3d as o3d
# Parameters:
initial_T = np.identity(4) # Initial transformation for ICP
distance = 0.1  # The threshold distance used for searching correspondences (closest points between clouds). I'm setting it to 10 cm.
# Read your point clouds:
source = o3d.io.read_point_cloud("point_cloud_1.xyz")
target = o3d.io.read_point_cloud("point_cloud_0.xyz")
# Define the type of registration:
type = o3d.pipelines.registration.TransformationEstimationPointToPoint(False)
# "False" means rigid transformation, scale = 1
# Define the number of iterations (I'll use 100):
iterations = o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration = 100)
# Do the registration:
result = o3d.pipelines.registration.registration_icp(source, target, distance, initial_T, type, iterations)
result is an object with 4 things: the transformation T (4x4), 2 metrics (rmse and fitness), and the set of correspondences.
To access the transformation:
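A minimal way to read it (assuming Open3D's RegistrationResult attributes):
T = result.transformation  # the estimated 4x4 transformation
print(result.fitness, result.inlier_rmse)  # the two quality metrics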
I have used it a lot with 3D clouds obtained from Terrestrial Laser Scanners (TLS) and from robots (Velodyne LIDAR).
With MATLAB:
We'll use point-to-point ICP again, because your data is 2D. Here is a minimal example with two point clouds randomly generated inside a triangle shape:
% Triangle vertices:
V1 = [-20, 0; -10, 10; 0, 0];
V2 = [-10, 0; 0, 10; 10, 0];
% Create clouds and show pair:
points = 5000;
N1 = criar_nuvem_triangulo(V1, points);
N2 = criar_nuvem_triangulo(V2, points);
pcshowpair(N1, N2)
% Register pair N1->N2 and show:
[T, N1_transformed, RMSE] = pcregistericp(N1, N2, 'Metric', 'pointToPoint', 'MaxIterations', 100);
pcshowpair(N1_transformed, N2)
"criar_nuvem_triangulo" is a function to generate random point clouds inside a triangle:
function [cloud] = criar_nuvem_triangulo(V, N)
% Function which creates 2D point clouds in a triangle format using random
% points
% Parameters: V = triangle vertices (3x2 matrix) | N = number of points
    t = sqrt(rand(N, 1));
    s = rand(N, 1);
    P = (1 - t) * V(1, :) + bsxfun(@times, ((1 - s) * V(2, :) + s * V(3, :)), t);
    points = [P, zeros(N, 1)];
    cloud = pointCloud(points);
end
You may just use cv::findHomography. It is a RANSAC-based approach around cv::getPerspectiveTransform.
auto H = cv::findHomography(srcPoints, dstPoints, CV_RANSAC, 3);
Where 3 is the reprojection threshold.
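If you are working from Python (as in the question), the equivalent call should be roughly:
H, inlier_mask = cv2.findHomography(srcPoints, dstPoints, cv2.RANSAC, 3.0)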
One traditional approach to solving your problem is point-set registration, which applies when you don't have matching-pair information. Point-set registration is similar to the method you are describing. You can find a MATLAB implementation here.
Thanks