I'm trying to add a second iiwa robot to my simulation after figuring out how to add the first one. I'm currently attempting to move the first one using
iiwa_robot = plant.GetBodyByName("iiwa_link_0")
X_B = plant.EvalBodyPoseInWorld(plant_context, iiwa_robot)
tf = RigidTransform(p=[15, 10, 5])
plant.SetFreeBodyPose(plant_context, iiwa_robot, X_B.multiply(tf))
However I keep getting the error
Body 'iiwa_link_0' is not a free floating body
I've also tried the other iiwa_link_* bodies, which produced the same error. Am I supposed to change something in the SDF file? Is there a better way to do this that I'm just not thinking of?
When you add your first IIWA arm, you can call WeldFrames to put the robot at the desired location.
Here is the pseudo-code:
iiwa1 = Parser(plant).AddModelFromFile(iiwa_file, "iiwa1")
# Weld the link0 of iiwa1 with the world at a given location.
plant.WeldFrames(plant.world_frame(), plant.GetBodyByName("iiwa_link_0", iiwa1).body_frame(), RigidTransform(p=[15, 10, 5]))
You can similarly add the second IIWA arm and weld its link_0 to the world at a different location.
iiwa2 = Parser(plant).AddModelFromFile(iiwa_file, "iiwa2")
# Weld the link0 of iiwa2 with the world at a given location.
plant.WeldFrames(plant.world_frame(), plant.GetBodyByName("iiwa_link_0", iiwa2).body_frame(), RigidTransform(p=[10, 10, 5]))
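Putting it all together, here is a rough end-to-end sketch in pydrake (the iiwa_file path, time step, and poses are placeholders; note that the welds must be added before plant.Finalize()):

from pydrake.all import (AddMultibodyPlantSceneGraph, DiagramBuilder, Parser,
                         RigidTransform)

builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=0.001)

iiwa_file = "path/to/iiwa14.sdf"  # placeholder path to the IIWA model
iiwa1 = Parser(plant).AddModelFromFile(iiwa_file, "iiwa1")
iiwa2 = Parser(plant).AddModelFromFile(iiwa_file, "iiwa2")

# Weld each arm's base link to the world at its own pose.
plant.WeldFrames(plant.world_frame(),
                 plant.GetFrameByName("iiwa_link_0", iiwa1),
                 RigidTransform(p=[15, 10, 5]))
plant.WeldFrames(plant.world_frame(),
                 plant.GetFrameByName("iiwa_link_0", iiwa2),
                 RigidTransform(p=[10, 10, 5]))

plant.Finalize()  # welds (and all other modeling) must happen before Finalize
diagram = builder.Build()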
After updating my Drake installation to the latest version, my previous simulation code no longer works because of the API change. I am wondering if there is an existing tutorial or example for the new visualization API.
I found a similar example online (https://github.com/RobotLocomotion/drake/issues/8576), but some of its header files cannot be found either.
My previous implementation visualizes the simulation of a piecewise polynomial trajectory, as follows:
// Make a trajectory from these waypoints (first-order hold)
drake::trajectories::PiecewisePolynomial<double> trajectory_solution =
    drake::trajectories::PiecewisePolynomial<double>::FirstOrderHold(t_solution, q_solution);
const auto traj_source = viz_builder.AddSystem<drake::systems::TrajectorySource<double>>(
    trajectory_solution);
// Connect the trajectory source directly to the geometry poses
auto q_to_pose = viz_builder.AddSystem<drake::systems::rendering::MultibodyPositionToGeometryPose<double>>(viz_plant);
viz_builder.Connect(traj_source->get_output_port(), q_to_pose->get_input_port());
viz_builder.Connect(q_to_pose->get_output_port(), viz_scene_graph.get_source_pose_port(viz_plant_source_id));
// Create the visualizer
drake::geometry::ConnectDrakeVisualizer(&viz_builder, viz_scene_graph);
std::unique_ptr<drake::systems::Diagram<double>> viz_diagram = viz_builder.Build();
// Set up simulator and run
drake::systems::Simulator<double> simulator(*viz_diagram);
simulator.set_publish_every_time_step(true);
simulator.set_target_realtime_rate(1.0);
simulator.Initialize();
simulator.AdvanceTo(100);
After taking the suggestions from the answers, I updated the implementation from
drake::geometry::ConnectDrakeVisualizer(&viz_builder, viz_scene_graph);
to
drake::geometry::DrakeVisualizer<double>::AddToBuilder(&viz_builder, viz_scene_graph);
drake::geometry::ConnectDrakeVisualizer has been moved to drake::geometry::DrakeVisualizer::AddToBuilder; see the API docs:
https://drake.mit.edu/doxygen_cxx/classdrake_1_1geometry_1_1_drake_visualizer.html#a532df1140a607611304fbd439519a4c5
I have some existing PyTorch code that uses cuda() as below, where net is a MainModel.KitModel object:
net = torch.load(model_path)
net.cuda()
and
im = cv2.imread(image_path)
im = Variable(torch.from_numpy(im).unsqueeze(0).float().cuda())
I want to test the code on a machine without any GPU, so I want to convert the CUDA code to a CPU version. I looked at some relevant posts about switching between CPU and GPU in PyTorch, but they are about the usage of device and thus don't apply to my case.
As pointed out by kHarshit in his comment, you can simply replace the .cuda() calls with .cpu():
net.cpu()
# ...
im = torch.from_numpy(im).unsqueeze(0).float().cpu()
However, this requires changing the code in multiple places every time you want to move from GPU to CPU and vice versa.
To alleviate this, PyTorch has a more general method, .to().
You can have a device variable defining where you want PyTorch to run; this device can also be the CPU (!).
For instance:
if torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")
Once you have determined where you want/can run, simply use .to() to send your model/variables there:
net.to(device)
# ...
im = torch.from_numpy(im).unsqueeze(0).float().to(device)
BTW,
You can use .to() to control the data type (replacing .float()) as well:
im = torch.from_numpy(im).unsqueeze(0).to(device=device, dtype=torch.float)
PS,
Note that the Variable API has been deprecated and is no longer required.
Also, if the checkpoint was saved on a GPU machine, pass map_location so it can be loaded on a CPU-only machine:
net = torch.load(model_path, map_location=torch.device('cpu'))
Pytorch docs: https://pytorch.org/tutorials/beginner/saving_loading_models.html#save-on-cpu-load-on-gpu
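Putting the pieces together, a minimal device-agnostic sketch might look like this (model_path and image_path are placeholders, as in the question):

import cv2
import torch

model_path = "path/to/model.pth"  # placeholder, as in the question
image_path = "path/to/image.png"  # placeholder, as in the question

# Use the GPU when available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# map_location lets a checkpoint saved on a GPU machine load on a CPU-only one.
net = torch.load(model_path, map_location=device)
net.to(device)
net.eval()

im = cv2.imread(image_path)
im = torch.from_numpy(im).unsqueeze(0).to(device=device, dtype=torch.float)

with torch.no_grad():
    out = net(im)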
When calling ComputePointPairPenetration() on a QueryObject in Drake (in Python, in a Jupyter Notebook environment), ComputePointPairPenetration() reliably kills the kernel. I'm not sure what's causing it and couldn't figure out how to get any error message.
In case it's relevant I'm running pydrake locally on a Mac.
Here is relevant code:
builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=0.00001)
file_name = FindResource("models/robot.urdf")
model = Parser(plant).AddModelFromFile(file_name)
file_name = FindResource("models/object.urdf")
object_model = Parser(plant).AddModelFromFile(file_name)
plant.Finalize()
diagram = builder.Build()
# Run simulation...
# Get geometry info from scene graph
context = scene_graph.AllocateContext()
q_obj = scene_graph.get_query_output_port().Eval(context)
q_obj.ComputePointPairPenetration()
Edit:
@Sherm's comment fixed my problem :) Thank you so much!
For reference:
diagram_context = diagram.CreateDefaultContext()
scene_graph_context = scene_graph.GetMyContextFromRoot(diagram_context)
q_obj = scene_graph.get_query_output_port().Eval(scene_graph_context)
q_obj.ComputePointPairPenetration()
You created a local Context for scene_graph. Instead you want the full diagram context so that the ports are connected up properly (e.g. scene_graph has an input port that receives poses from MultibodyPlant). So the above should work if you ask the Diagram to create a Context, then ask for the SceneGraph subcontext for the calls you have above, rather than creating a standalone SceneGraph context.
This lets you extract the fully-connected subcontext:
scene_graph_context = scene_graph.GetMyContextFromRoot(diagram_context)
FTR Here's a similar formulation in a Drake Python unittest:
TestPlant.test_scene_graph_queries
Note that this takes an alternate route (using diagram.GetMutableSubsystemContext instead of scene_graph.GetMyContextFromRoot), namely because it's doing scalar-type conversion as well.
If you're curious about scalar-type conversion (esp. if you're going to be doing optimization, e.g. needing AutoDiffXd), please see:
Drake C++ API: System Scalar Conversion Overview
Drake Python API: pydrake.systems.scalar_conversion
Additionally, here are examples of scalar-converting both MultibodyPlant and SceneGraph for testing InverseKinematics constraint classes:
inverse_kinematics.py: TestConstraints
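If all you need is an AutoDiffXd copy of an already-built diagram, a rough pydrake sketch (assuming plant and diagram were created and finalized as usual) could look like:

# Convert the whole diagram (including the MultibodyPlant and SceneGraph) to AutoDiffXd.
diagram_ad = diagram.ToAutoDiffXd()
plant_ad = diagram_ad.GetSubsystemByName(plant.get_name())
context_ad = diagram_ad.CreateDefaultContext()
plant_context_ad = plant_ad.GetMyContextFromRoot(context_ad)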
I installed ROS Melodic on Ubuntu 18.04.
I'm running a rosbag in the background to mock camera messages on rostopics.
I set the camera names in rosparams and iterate through them to capture each camera's topics.
I'm using the message_filters ApproximateTimeSynchronizer to get time-synchronized data, as mentioned in the official documentation:
http://wiki.ros.org/message_filters
But most of the time the callback registered with ApproximateTimeSynchronizer is not being called, or is called with a delay. The code snippet I'm using is given below:
What am I doing wrong here?
import rospy
import message_filters
from sensor_msgs.msg import CameraInfo, Image, PointCloud2

def camera_callback(*args):
    pass  # Other logic comes here

rospy.init_node('my_listener', anonymous=True)
camera_object_data = []
for camera_name in rospy.get_param('/my/cameras'):
    camera_object_data.append(message_filters.Subscriber(
        '/{}/hd/camera_info'.format(camera_name), CameraInfo))
    camera_object_data.append(message_filters.Subscriber(
        '/{}/hd/image_color_rect'.format(camera_name), Image))
    camera_object_data.append(message_filters.Subscriber(
        '/{}/qhd/image_depth_rect'.format(camera_name), Image))
    camera_object_data.append(message_filters.Subscriber(
        '/{}/qhd/points'.format(camera_name), PointCloud2))

topic_list = [filter_obj for filter_obj in camera_object_data]
ts = message_filters.ApproximateTimeSynchronizer(topic_list, 10, 1, allow_headerless=True)
ts.registerCallback(camera_callback)
rospy.spin()
Looking at your code, it seems correct. There could, however, be a problem with bad timestamps, and therefore with this synchronizer as well; see http://wiki.ros.org/message_filters/ApproximateTime for the algorithm's assumptions.
My recommendation is to write a corresponding node that publishes empty versions of these four msgs all at the same time. If it's still not working in this perfect scenario, there is an issue with the code above. If it is working just fine, then you need to pay attention to the headers.
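For example, a bare-bones test publisher along those lines might look like this (the topic names and the camera0 namespace are placeholders; adjust them to match your subscribers):

import rospy
from sensor_msgs.msg import CameraInfo, Image, PointCloud2

rospy.init_node('sync_test_publisher')
pubs = [rospy.Publisher('/camera0/hd/camera_info', CameraInfo, queue_size=1),
        rospy.Publisher('/camera0/hd/image_color_rect', Image, queue_size=1),
        rospy.Publisher('/camera0/qhd/image_depth_rect', Image, queue_size=1),
        rospy.Publisher('/camera0/qhd/points', PointCloud2, queue_size=1)]

rate = rospy.Rate(10)
while not rospy.is_shutdown():
    stamp = rospy.Time.now()
    for pub, msg in zip(pubs, (CameraInfo(), Image(), Image(), PointCloud2())):
        msg.header.stamp = stamp  # identical stamps, so the synchronizer must fire
        pub.publish(msg)
    rate.sleep()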
Given that you have it as a bag file, you can step through the msgs on the command line and observe the timestamps as well. (You can also step through them within Python.)
$ rosbag play --pause recorded1.bag # step through msgs by pressing 's'
On time-noisy msgs with small payloads, I've previously written a node to listen to all these msgs and republish them all with the latest time found on any of them (for sync'd logging to CSV). Not optimal, but it should reveal where the issue lies.
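A rough sketch of that restamping workaround (again with placeholder topic names) could be:

import rospy
from sensor_msgs.msg import CameraInfo, Image, PointCloud2

latest_stamp = rospy.Time(0)

def make_relay(topic, msg_type):
    # Republish every message on <topic>/restamped with the newest stamp seen so far.
    pub = rospy.Publisher(topic + '/restamped', msg_type, queue_size=10)
    def callback(msg):
        global latest_stamp
        if msg.header.stamp > latest_stamp:
            latest_stamp = msg.header.stamp
        msg.header.stamp = latest_stamp
        pub.publish(msg)
    rospy.Subscriber(topic, msg_type, callback)

rospy.init_node('restamp_relay')
for topic, msg_type in [('/camera0/hd/camera_info', CameraInfo),
                        ('/camera0/hd/image_color_rect', Image),
                        ('/camera0/qhd/image_depth_rect', Image),
                        ('/camera0/qhd/points', PointCloud2)]:
    make_relay(topic, msg_type)
rospy.spin()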
In the path tracer example in the OptiX SDK I want to add an OBJ file to trace, so I went over to the loadGeometry() function, and right after the last parallelogram creation I added this code block:
OptiXMesh mesh;
mesh.context = context;
loadMesh(mesh_file, mesh);
gis.push_back(mesh.geom_instance);
//setMaterial(gis.back(), diffuse, "diffuse_color", white);
Note that gis is a GeometryInstance vector.
When I run it, the display window opens and immediately closes, and I get the following exceptions:
Exception thrown at 0x00007FFA2856A388 in optixPathTracer.exe: Microsoft C++ exception: optix::TypeMismatch at memory location 0x0000000E29EFF030.
Exception thrown at 0x00007FFA2856A388 in optixPathTracer.exe: Microsoft C++ exception: optix::Exception at memory location 0x0000000E29EFF5C0.
If I comment out the modified code block, it works fine.
How can I load an OBJ file to the tracer? Do I need to add something in the shaders / RT_PROGRAMs side?
Thanks in advance!
P.S. I know that the loadMesh() function takes care of the material, but since the program doesn't work, I tried to set a material just as it's done for all the other GeometryInstances, as demonstrated in the code block above.
About the intersect function:
In the path tracer example the scene is made of parallelograms. In the parallelogram.cu file, the intersect function is called by the OptiX pipeline to detect whether a ray intersects a parallelogram (a four-point shape). In OptixPathTracer.cpp, in the createParallelogram method, the intersection program is set to the function in the parallelogram.cu file.
When you load a mesh, it is (most commonly) made of triangles, so the intersect function being used is not appropriate for the geometry of the mesh (I guess that explains the TypeMismatch error, but the message is not very clear and you did not post a complete example that would allow reproducing the error).
How to fix:
If you look at the ray casting example, in OptixRaycastingContext.cu there's an intersect function that's made for triangles. You should most probably copy that triangle intersection function to your .cu file (don't forget to rename it: there's already an intersect function there that is used for parallelograms). Then, when you create the Geometry object for your mesh, call setIntersectionProgram with the triangle intersection function as the parameter.
The other way:
You can also start working from the mesh viewer example and change the ray tracing code (the .cu files) to do path tracing. It is a good exercise for understanding how OptiX works.