Visualizing a line in drake visualizer with C++ - drake

This question is related to "Is there a way of visualising a line in drake visualizer", where I asked about how to visualize a line in the drake visualizer (about 3 years ago; that approach worked fine with v0.10.0). I am trying to achieve the same with the new API and was wondering whether there is any example or documentation that can guide me on how to publish a line to the visualizer. My previous method for publishing a line looks like this:
void publishLine(const std::vector<std::vector<double>>& pts,
                 const std::vector<std::string>& path, lcm::DrakeLcm& lcm,
                 std::vector<double> color) {
  long long int now = getUnixTime() * 1000 * 1000;
  nlohmann::json j = {{"timestamp", now},
                      {
                          "setgeometry",
                          {{{"path", path},
                            {"geometry",
                             {
                                 {"type", "line"},
                                 {"points", pts},
                                 {"color", color},
                                 {"radius", 0.1},
                             }}}},
                      },
                      {"settransform", nlohmann::json({})},
                      {"delete", nlohmann::json({})}};
  auto msg = robotlocomotion::viewer2_comms_t();
  msg.utime = now;
  msg.format = "treeviewer_json";
  msg.format_version_major = 1;
  msg.format_version_minor = 0;
  msg.data.clear();
  for (auto& c : j.dump()) msg.data.push_back(c);
  msg.num_bytes = j.dump().size();
  // Use channel 0 for remote viewer communications.
  lcm.get_lcm_instance()->publish("DIRECTOR_TREE_VIEWER_REQUEST_<0>", &msg);
}

You can use Meshcat::SetLine or Meshcat::SetLineSegments: https://drake.mit.edu/doxygen_cxx/classdrake_1_1geometry_1_1_meshcat.html#aa5b082d79e267c040cbd066a11cdcb54
One caveat is that many browser/WebGL implementations do not support the linewidth property in three.js. For thick lines, consider adding a cylinder using SetObject instead.
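Here is a minimal standalone sketch of the Meshcat route, assuming a recent Drake release; the path name, vertex values, and colour are illustrative only and not from the original question:
#include <iostream>

#include <Eigen/Dense>

#include "drake/geometry/meshcat.h"
#include "drake/geometry/rgba.h"

int main() {
  drake::geometry::Meshcat meshcat;  // serves the visualizer to a browser tab
  // Each column of `vertices` is one point of the polyline.
  Eigen::Matrix3Xd vertices(3, 3);
  vertices << 0.0, 0.5, 1.0,   // x
              0.0, 0.5, 0.0,   // y
              0.0, 0.5, 1.0;   // z
  // line_width is in pixels and may be ignored by some WebGL back-ends,
  // which is the caveat mentioned above.
  meshcat.SetLine("/drake/my_line", vertices, 2.0,
                  drake::geometry::Rgba(1.0, 0.0, 0.0, 1.0));
  std::cout << "Open " << meshcat.web_url() << ", then press Enter to exit.\n";
  std::cin.get();
  return 0;
}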

Related

How to send clusters in separate nodes (ROS, PCL)

Hi, I'm new to the Point Cloud Library. I'm trying to show the clustering result points in RViz or the PCL viewer, but nothing shows up. I also realized that my data shows nothing when I subscribe to it and print it with cout. I hope someone can help with my problem, thanks.
This is my code for the clustering and publishing node:
void cloudReceive(const sensor_msgs::PointCloud2ConstPtr& inputMsg){
    mutex_lock.lock();
    pcl::fromROSMsg(*inputMsg, *inputCloud);
    cout << inputCloud << endl;
    pcl::search::KdTree<pcl::PointXYZRGB>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZRGB>);
    tree->setInputCloud(inputCloud);
    std::vector<pcl::PointIndices> cluster_indices;
    pcl::EuclideanClusterExtraction<pcl::PointXYZRGB> ec;
    ec.setClusterTolerance(0.03); // 3 cm
    ec.setMinClusterSize(200);    // min points
    ec.setMaxClusterSize(1000);   // max points
    ec.setSearchMethod(tree);
    ec.setInputCloud(inputCloud);
    ec.extract(cluster_indices);
    if(cluster_indices.size() > 0){
        std::vector<pcl::PointIndices>::const_iterator it;
        int i = 0;
        for (it = cluster_indices.begin(); it != cluster_indices.end(); ++it){
            if(i >= 10)
                break;
            cloud_cluster[i]->points.clear();
            std::vector<int>::const_iterator idx_it;
            for (idx_it = it->indices.begin(); idx_it != it->indices.end(); idx_it++)
                cloud_cluster[i]->points.push_back(inputCloud->points[*idx_it]);
            cloud_cluster[i]->width = cloud_cluster[i]->points.size();
            // cloud_cluster[i]->height = 1;
            // cloud_cluster[i]->is_dense = true;
            cout << "PointCloud representing the Cluster: " << cloud_cluster[i]->points.size() << " data points" << endl;
            std::stringstream ss;
            ss << "cobaa_pipecom2_cluster_" << i << ".pcd";
            writer.write<pcl::PointXYZRGB> (ss.str(), *cloud_cluster[i], false);
            pcl::toROSMsg(*cloud_cluster[i], outputMsg);
            // cout << "data = " << outputMsg << endl;
            cloud_cluster[i]->header.frame_id = FRAME_ID;
            pclpub[i++].publish(outputMsg);
            // i++;
        }
    }
    else
        ROS_INFO_STREAM("0 clusters extracted\n");
}
And this is the main function:
int main(int argc, char** argv){
    for (int z = 0; z < 10; z++) {
        // std::cout << " - clustering/" << z << std::endl;
        cloud_cluster[z] = pcl::PointCloud<pcl::PointXYZRGB>::Ptr(new pcl::PointCloud<pcl::PointXYZRGB>);
        cloud_cluster[z]->height = 1;
        cloud_cluster[z]->is_dense = true;
        // cloud_cluster[z]->header.frame_id = FRAME_ID;
    }
    ros::init(argc, argv, "clustering");
    ros::NodeHandlePtr nh(new ros::NodeHandle());
    pclsub = nh->subscribe("/pclsegmen", 1, cloudReceive);
    std::string pub_str("clustering/0");
    for (int z = 0; z < 10; z++) {
        pub_str[11] = z + 48; // 48 = '0' (ASCII)
        // z++;
        pclpub[z] = nh->advertise <sensor_msgs::PointCloud2> (pub_str, 1);
    }
    // pclpub = nh->advertise<sensor_msgs::PointCloud2>("/pclcluster",1);
    ros::spin();
}
This isn't an exact answer, but I think it addresses your issue & may ease your debugging.
RViz can directly subscribe to a published point cloud, i.e. the one I'm assuming you're trying to see in the cloudReceive callback. If you set the Fixed Frame to whichever frame it's being published in, and add the topic from the available topics, you should see the points. (That's easier than trying to rebroadcast it as different topics.)
Also, I recommend looking at the rostopic command-line tool. You can do rostopic list to check whether the topic is being published, rostopic bw to see whether it's really publishing the expected volume of data (e.g. bytes vs. kilobytes vs. megabytes), rostopic hz to see how frequently (if ever) it's publishing, and (briefly) rostopic echo to look at the data itself. (This is me assuming from your question that it's more an issue with the data coming into your node.)
If you're having trouble not with data coming into the node, nor with the visualization of point cloud data in general, but with the transformed data that's supposed to come out of the node, I would check that the clustering worked, and reduce your code further to just having one publisher publish something, as in the sketch below. You may be doing something weird, like messing up your pointers. You could also turn on stronger compilation warnings for your node with -Wall -Wextra -Werror, or step through its execution via gdb (launch-prefix="xterm -e gdb --args").
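As a starting point for that, here is a minimal sketch of a node that publishes a single synthetic cloud once per second; the topic name, frame_id, and point values are made up for illustration and should be adjusted to your setup:
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "single_cluster_check");
    ros::NodeHandle nh;
    ros::Publisher pub = nh.advertise<sensor_msgs::PointCloud2>("clustering/0", 1);

    // A tiny synthetic cloud; nothing here depends on the clustering logic.
    pcl::PointCloud<pcl::PointXYZRGB> cloud;
    pcl::PointXYZRGB pt;
    pt.x = 0.0f; pt.y = 0.0f; pt.z = 1.0f;
    pt.r = 255; pt.g = 0; pt.b = 0;
    cloud.push_back(pt);
    cloud.width = cloud.points.size();
    cloud.height = 1;
    cloud.is_dense = true;

    sensor_msgs::PointCloud2 msg;
    pcl::toROSMsg(cloud, msg);
    msg.header.frame_id = "base_link";  // must match the Fixed Frame set in RViz

    ros::Rate rate(1.0);
    while (ros::ok()) {
        msg.header.stamp = ros::Time::now();
        pub.publish(msg);
        rate.sleep();
    }
    return 0;
}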
The solution is that I changed the ASCII arithmetic into boost::lexical_cast. Thanks for your response; I hope this can help others.
for (int z = 0; z < CLOUD_QTD; z++) {
    // pub_str[11] = z + 48;
    std::string topicName = "/pclcluster/" + boost::lexical_cast<std::string>(z);
    global::pub[z] = n.advertise <sensor_msgs::PointCloud2> (topicName, 1);
}

OpenCV detect and compute image features

I recently upgraded OpenCV from 3.4.5 to 4.2.0.
Before, I followed this stitching example: https://github.com/opencv/opencv/blob/5131619a1a4d1d3a860b5da431742cc6be945332/samples/cpp/stitching_detailed.cpp (particularly line 480). After upgrading, I altered the code to align more with this newer example: https://github.com/opencv/opencv/blob/master/samples/cpp/stitching_detailed.cpp (note line 481).
The problem is that with this new computeImageFeatures function, I am getting fewer detected features. The older code with the same images gave me 1400+ features, but computeImageFeatures gives me exactly 500 features per image. Any ideas how to "fix" this? I believe it also causes the "Bundle Adjuster" to fail later.
According to the documentation of cv::ORB::create, the default value of the nfeatures argument is 500:
The first argument is nfeatures; you may set it to a greater number such as 2000.
Here are the constructor arguments:
static Ptr<ORB> cv::ORB::create(int nfeatures = 500,
                                float scaleFactor = 1.2f,
                                int nlevels = 8,
                                int edgeThreshold = 31,
                                int firstLevel = 0,
                                int WTA_K = 2,
                                int scoreType = ORB::HARRIS_SCORE,
                                int patchSize = 31,
                                int fastThreshold = 20)
Try modifying:
if (features_type == "orb")
{
    finder = ORB::create();
}
to
if (features_type == "orb")
{
    finder = ORB::create(2000);
}
In case you are not using ORB but another type of feature, read the documentation of that constructor.
I assume all feature types have a similar limiting argument.
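As a quick sanity check, here is a small sketch (assuming OpenCV 4.x built with the stitching module; the input file name is hypothetical) that raises nfeatures and prints how many keypoints computeImageFeatures actually returns:
#include <iostream>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/stitching/detail/matchers.hpp>

int main() {
    // Hypothetical input image; replace with one of your stitching inputs.
    cv::Mat img = cv::imread("input.jpg");
    if (img.empty()) return 1;

    // Raise the keypoint cap from the default 500 to 2000.
    cv::Ptr<cv::Feature2D> finder = cv::ORB::create(2000);

    cv::detail::ImageFeatures features;
    cv::detail::computeImageFeatures(finder, img, features);
    std::cout << "Detected " << features.keypoints.size() << " keypoints" << std::endl;
    return 0;
}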

PositionConstraint goal for robot arm: Unable to construct goal representation

I have a setup of ROS Indigo and Gazebo under Ubuntu 14.04. Under ROS, a MoveIt node is running. A robot arm (an IRB120) is simulated and standing in Gazebo. I have a node that uses MoveIt (the move_group node) to plan a path (trajectory) to the destination that Bob wants. The planned trajectory will later be sent to Gazebo to be shown.
There are two approaches that Bob can use to describe the destination:
Angles of each joint of the arm: using an array of six numbers (one per joint of the arm), the configuration of the arm is defined. This approach works fine. It uses the JointConstraint class:
double goal_poses[] = {0.52, 0.50, 0.73, -0.02, 0.31, 6.83};
for(int i = 0; i < 6; i++) // iterate over the joints of the arm
{
    moveit_msgs::JointConstraint jc;
    jc.weight = 1.0;
    jc.tolerance_above = 0.0001;
    jc.tolerance_below = 0.0001;
    jc.position = goal_poses[i];
    jc.joint_name = names[i];
    goal_constraint.joint_constraints.push_back(jc);
}
Define the location and orientation of the end effector only. I cannot get this approach to work. I have used the PositionConstraint class.
The problem in short: I can describe a destination using the JointConstraint class, but I don't know how to describe it with the PositionConstraint class. How do I describe a goal by just pointing out where the end effector should be?
How I describe the goal in PositionConstraint format (I point out where the end effector should be and what its orientation should be):
moveit_msgs::PositionConstraint pc;
pc.weight = 1.0;
geometry_msgs::Pose p;
p.position.x = 0.3; // not sure if feasible position
p.position.y = 0.3; // not sure if feasible position
p.position.z = 0.3; // not sure if feasible position
pc.link_name="tool0";
p.orientation.x = 0;
p.orientation.y = 0;
p.orientation.z = 0;
p.orientation.w = 1;
pc.constraint_region.mesh_poses.push_back(p);
goal_constraint.position_constraints.push_back(pc);
But when the request is sent, the server responds with:
[ERROR] [1527689581.951677797, 295.242000000]: Unable to construct goal representation
Note:
In both cases, I add the goal_constraint to the trajectory_request:
trajectory_request.goal.request.goal_constraints.push_back(goal_constraint);
// add other details to trajectory_request here...
trajectory_request is then sent to move_group (by publishing the trajectory_request on the /move_group/goal topic).
A slightly different approach solved the problem of describing a goal with the end-effector location and orientation:
Instead of publishing the goal on a topic for another node to parse and read, we can use the MoveIt library function computeCartesianPath. (In this example the code to publish the trajectory is commented out and partially missing.)
void planTo(std::vector<double> coordinate, std::vector<double> orientation){
    geometry_msgs::Pose p;
    p.orientation.w = 1.0;
    p.position.x = coordinate[0];
    p.position.y = coordinate[1];
    p.position.z = coordinate[2];
    tf::Quaternion q = tf::createQuaternionFromRPY(
        orientation[0], orientation[1], orientation[2]);
    p.orientation.x = q.getX();
    p.orientation.y = q.getY();
    p.orientation.z = q.getZ();
    p.orientation.w = q.getW();
    std::vector<geometry_msgs::Pose> goals;
    goals.push_back(p);
    moveit::planning_interface::MoveGroup mg("manipulator");
    mg.setStartStateToCurrentState();
    // load the path into the `trajectory` variable:
    moveit_msgs::RobotTrajectory trajectory;
    mg.computeCartesianPath(goals, 0.01, 0.0, trajectory);
    // publish to gazebo:
    // trajectory.joint_trajectory.header.stamp = ros::Time::now();
    // publisher.publish(trajectory.joint_trajectory);
}
I solved this a few months ago and unfortunately I do not remember the exact source/tutorial.
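For completeness, here is a minimal sketch of how the computed trajectory could be executed through MoveIt itself instead of being re-published to Gazebo by hand. It assumes the same Indigo-era MoveGroup interface and "manipulator" group as above; the Plan wrapper and the function name are illustrative and not from the original post:
#include <vector>
#include <moveit/move_group_interface/move_group.h>  // Indigo-era header
#include <moveit_msgs/RobotTrajectory.h>
#include <geometry_msgs/Pose.h>

// Plan a Cartesian path through the given waypoints and execute it.
void planAndExecute(const std::vector<geometry_msgs::Pose>& goals) {
    moveit::planning_interface::MoveGroup mg("manipulator");
    mg.setStartStateToCurrentState();
    moveit_msgs::RobotTrajectory trajectory;
    // `fraction` reports how much of the requested path could be followed.
    double fraction = mg.computeCartesianPath(goals, 0.01, 0.0, trajectory);
    if (fraction > 0.99) {
        moveit::planning_interface::MoveGroup::Plan plan;
        plan.trajectory_ = trajectory;
        mg.execute(plan);
    }
}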

Understanding StereoMatching in Point Cloud Library

Situation: I am trying to get a point cloud with pcl::AdaptiveCostSOStereoMatching, which uses two rectified images (the images themselves are fine).
I used these tutorials to learn how to do this:
First tutorial
Second tutorial
Error: the program crashes at runtime when calling the "compute" method of AdaptiveCostSOStereoMatching.
Question: how do I correctly pass images to the "compute" method?
I tried:
1) Images converted by png2pcd
(command line: "png2pcd.exe in.png out.pcd")
2) Images converted from cv::Mat with the function below
But no luck.
The function which converts a cv::Mat to a pcl::PointCloud:
void MatToPointCloud(Mat& mat, pcl::PointCloud<RGB>::Ptr cloud)
{
    int width = mat.cols;
    int height = mat.rows;
    pcl::RGB val;
    val.r = 0; val.g = 0; val.b = 0;
    for (int i = 0; i < mat.rows; i++)
        for (int j = 0; j < mat.cols; j++)
        {
            auto point = mat.at<Vec3b>(i, j);
            //std::cout << j << " " << i << "\n";
            val.b = point[0];
            val.g = point[1];
            val.r = point[2];
            cloud->at(j, i) = val;
        }
}
pcl::AdaptiveCostSOStereoMatching (compute)
// Input
Mat leftMat, rightMat;
leftMat = imread("left.png");
rightMat = imread("right.png");
int width = leftMat.cols;
int height = rightMat.rows;
pcl::RGB val;
val.r = 0; val.g = 0; val.b = 0;
pcl::PointCloud<pcl::RGB>::Ptr left_cloud(new pcl::PointCloud<pcl::RGB>(width, height, val));
pcl::PointCloud<pcl::RGB>::Ptr right_cloud(new pcl::PointCloud<pcl::RGB>(width, height, val));
MatToPointCloud(leftMat, left_cloud);
MatToPointCloud(rightMat, right_cloud);
// Calculation
pcl::AdaptiveCostSOStereoMatching stereo;
stereo.setMaxDisparity(60);
//stereo.setXOffest(0); // for some reason this is not recognized
stereo.setRadius(5);
stereo.setSmoothWeak(20);
stereo.setSmoothStrong(100);
stereo.setGammaC(25);
stereo.setGammaS(10);
stereo.setRatioFilter(20);
stereo.setPeakFilter(0);
stereo.setLeftRightCheck(true);
stereo.setLeftRightCheckThreshold(1);
stereo.setPreProcessing(true);
stereo.compute(*left_cloud, *right_cloud); // <-- CRASHING THERE
stereo.medianFilter(4);
pcl::PointCloud<pcl::PointXYZRGB>::Ptr out_cloud(new pcl::PointCloud<pcl::PointXYZRGB>);
stereo.getPointCloud(318.11220, 224.334900, 368.534700, 0.8387445, out_cloud, left_cloud);
Error information:
Output log: HEAP[App.exe]:
Heap block at 0000006B0F828460 modified at 0000006B0F8284A8 past requested size of 38
App.exe has triggered a breakpoint.
left_cloud (the right cloud looks like left_cloud)
Mini question: if AdaptiveCostSOStereoMatching really allows building a point cloud from two images, how does ACSSM do this without intrinsic and extrinsic parameters?
Problem: I had downloaded and installed an old version of PCL that was built without the stereo module. After that, I downloaded the stereo module from another PCL package and added that library to my PCL installation, and it worked incorrectly.
Solution: I compiled PCL 1.8 myself and my program is OK now.
OS: Windows
IDE: MSVS 12 2013 x64
If you try to compile PCL yourself, these links can help you:
Official-tutorial-1
Official-tutorial-2
Good help with FLANN and VTK
Example to verify installation

Standard Hough Lines in EMGU CV

I need to use the standard Hough transform (instead of the HoughLinesBinary method, which implements the probabilistic Hough transform) and have attempted to do so by creating a custom version of the HoughLinesBinary method:
using (MemStorage stor = new MemStorage())
{
    IntPtr lines = CvInvoke.cvHoughLines2(canny.Ptr, stor.Ptr, Emgu.CV.CvEnum.HOUGH_TYPE.CV_HOUGH_STANDARD, rhoResolution, (thetaResolution * Math.PI) / 180, threshold, 0, 0);
    Seq<MCvMat> segments = new Seq<MCvMat>(lines, stor);
    List<MCvMat> lineslist = segments.ToList();
    foreach (MCvMat line in lineslist)
    {
        // Process lines: (rho, theta)
    }
}
My problem is that I am unsure of the type of the returned sequence. I believe it should be MCvMat, based on the documentation saying that CvMat* is used in OpenCV, which also states that for STANDARD "the matrix must be (the created sequence will be) of CV_32FC2 type".
I am unclear as to what I would need to do to return and process the correct output data from the STANDARD Hough lines (i.e. the 2x1 vector for each line giving the rho and theta information).
Any help would be greatly appreciated. Thank you
-Sal
I had the same problem myself a couple of days ago. This is how I solved it using marshalling. Please let me know if you find a simpler solution.
using (MemStorage stor = new MemStorage())
{
    IntPtr lines = CvInvoke.cvHoughLines2(canny.Ptr, stor.Ptr, Emgu.CV.CvEnum.HOUGH_TYPE.CV_HOUGH_STANDARD, rhoResolution, (thetaResolution * Math.PI) / 180, threshold, 0, 0);
    int maxLines = 100;
    for (int i = 0; i < maxLines; i++)
    {
        IntPtr line = CvInvoke.cvGetSeqElem(lines, i);
        if (line == IntPtr.Zero)
        {
            // No more lines
            break;
        }
        PolarCoordinates coords = (PolarCoordinates)System.Runtime.InteropServices.Marshal.PtrToStructure(line, typeof(PolarCoordinates));
        // Do something with your Hough lines
    }
}
with a struct defined as follows:
public struct PolarCoordinates
{
    public float Rho;
    public float Theta;
}
