How to display a scan in RViz with an SDF file? - ros

Good morning,
I am running a simulation of a hexacopter in Gazebo. I have an SDF file of my drone with a 3D lidar.
I publish the lidar data on the topic /scan, and I want to visualize it in RViz.
I read that I should create a URDF file of my drone, but I can't manage the conversion (and the SDF file is quite large).
Is there another way to display the data without having to do the conversion?
Thank you

If you're publishing a LaserScan (or PointCloud) message on the topic /scan, and its header.frame_id is (for example) lidar, then you can simply set the Fixed Frame in RViz to lidar and add a LaserScan display subscribing to /scan. You don't need to render the whole vehicle if you don't want to.
Personally, I find just rendering a coordinate axis frame to be sufficient sometimes.
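As a rough illustration (not from the original thread), a minimal rospy publisher like the sketch below is all RViz needs: set the Fixed Frame to lidar and add a LaserScan display on /scan. The topic and frame names here are assumptions; use whatever your Gazebo lidar plugin actually publishes.

```python
#!/usr/bin/env python
# Minimal sketch: publish a LaserScan on /scan with frame_id "lidar".
# In RViz, set the Fixed Frame to "lidar" and add a LaserScan display on /scan.
# Topic and frame names are assumptions; match them to your SDF plugin config.
import math
import rospy
from sensor_msgs.msg import LaserScan

def main():
    rospy.init_node('fake_scan_publisher')
    pub = rospy.Publisher('/scan', LaserScan, queue_size=10)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        scan = LaserScan()
        scan.header.stamp = rospy.Time.now()
        scan.header.frame_id = 'lidar'          # RViz resolves the data in this frame
        scan.angle_min = -math.pi / 2
        scan.angle_max = math.pi / 2
        scan.angle_increment = math.pi / 180    # 1 degree steps
        scan.range_min = 0.1
        scan.range_max = 30.0
        n = int((scan.angle_max - scan.angle_min) / scan.angle_increment)
        scan.ranges = [5.0] * n                 # dummy constant-range readings
        pub.publish(scan)
        rate.sleep()

if __name__ == '__main__':
    main()
```

In the real setup the Gazebo sensor plugin already publishes the scan for you; the only things RViz cares about are the topic name and the header.frame_id (plus a TF transform if you choose a different Fixed Frame).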

Related

Using hector_slam mapping for the ROS navigation stack

Following is the approach that I am adopting:
I have built the map using the hector_slam package while controlling my robot with an RF transmitter.
For the move_base node, I am using the amcl localization parameters.
To test my robot's position on the map, I drive the robot manually; I then receive the warnings below, and the robot's pose jumps randomly all around the map.
[WARN]: The origin of the sensor is out of map bounds. so the costmap cannot raytrace for it
[WARN]: Map update loop missed its desired rate of 10.0000Hz...the loop actually took 0.1207 seconds.
Autonomous navigation will only work once this localization problem is solved. My naive guess is that the problem comes from building the map with hector_slam and later using amcl for localization.
Questions:
Is this approach possible?
Can I use hector_slam for localization instead of amcl?
How do I make the robot localize itself in the pre-built map (from hector_slam) while I control it manually?
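For reference, when amcl starts on a pre-built map it needs a reasonable initial pose estimate, otherwise the particle cloud can end up outside the map, which is consistent with the "origin of the sensor is out of map bounds" warning. Below is a minimal sketch of publishing that estimate from Python; the /initialpose topic and the map frame are amcl's defaults, and the pose values are placeholders for wherever the robot actually starts.

```python
#!/usr/bin/env python
# Sketch: give amcl an initial pose estimate on /initialpose so the particle
# filter starts inside the pre-built map. Pose values below are placeholders.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

def main():
    rospy.init_node('set_initial_pose')
    pub = rospy.Publisher('/initialpose', PoseWithCovarianceStamped,
                          queue_size=1, latch=True)
    msg = PoseWithCovarianceStamped()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = 'map'
    msg.pose.pose.position.x = 0.0          # where the robot actually starts
    msg.pose.pose.position.y = 0.0
    msg.pose.pose.orientation.w = 1.0       # facing along +x
    msg.pose.covariance = [0.0] * 36
    msg.pose.covariance[0] = 0.25           # x variance
    msg.pose.covariance[7] = 0.25           # y variance
    msg.pose.covariance[35] = 0.07          # yaw variance
    pub.publish(msg)
    rospy.sleep(1.0)                        # let the latched message go out

if __name__ == '__main__':
    main()
```

The same estimate can also be set interactively with the "2D Pose Estimate" button in RViz, which publishes to the same topic.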

How to minimize the error integrating 3D angular velocity data obtained by the IMU to get linear velocity?

I have an IMU sensor that gives me raw data such as orientation, angular velocity, and linear acceleration. I'm using ROS and running a Gazebo UUV simulation. I want to get linear velocity from the raw IMU data.
The naive integration method is as follows: the simplest method for IMU state estimation is the naive integration of the measured data. We estimate the attitude by integrating the 3D angular velocity data obtained by the IMU. Assuming that the time step Δt is small, the attitude at each time step can be calculated incrementally.
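As a concrete sketch of that naive scheme (my own illustration, not from the question): propagate a unit quaternion with the measured angular velocity over each small step Δt. Any bias in the gyro signal gets integrated right along with it, which is where the drift comes from.

```python
import numpy as np

def quat_multiply(q, r):
    """Hamilton product of two quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q, omega, dt):
    """Propagate orientation q by one step of measured angular velocity omega (rad/s).

    Uses the small-angle approximation dq ~ [1, 0.5*omega*dt]; any bias in
    omega is integrated too, which is why the estimate drifts over time.
    """
    dq = np.concatenate(([1.0], 0.5 * np.asarray(omega) * dt))
    q = quat_multiply(q, dq)
    return q / np.linalg.norm(q)   # re-normalise to stay a unit quaternion

# Example: constant yaw rate of 0.1 rad/s for 10 s at dt = 0.01 s
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(1000):
    q = integrate_gyro(q, [0.0, 0.0, 0.1], 0.01)
print(q)   # ~ a rotation of 1 rad about z
```

A filter such as an EKF/UKF (e.g. robot_localization, mentioned below) limits this drift by fusing the gyro with the accelerometer and other sensors instead of integrating it blindly.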
If I integrate over time, error accumulates and the estimate becomes inaccurate, for example when the robot makes turns. So I'm looking for methods (ROS packages or tools outside the ROS framework) or code that can correct that error.
Any help?
I would first recommend that you try fitting your input sensor data into an EKF or UKF node from the robot_localization package. This package is the most used and most optimized pose estimation package in the ROS ecosystem.
It can handle 3D sensor input, but you have to configure the parameters yourself (there are no real defaults; it is all configuration). Besides the configuration docs, the GitHub repository has good examples of YAML parameter configurations (Ex.) (you'll want those in a separate file from the launch file) and example launch files (Ex.).

ROS Human-Robot mapping (Baxter)

I'm having some difficulty understanding the concept of teleoperation in ROS, so I'm hoping someone can clear some things up.
I am trying to control a Baxter robot (in simulation) using an HTC Vive device. I have a node (publisher) which successfully extracts PoseStamped data (containing pose data referenced to the lighthouse base stations) from the controllers and publishes it on separate topics for the right and left controllers.
Now I wish to create the subscribers which receive the pose data from the controllers and convert it to a pose for the robot. What I'm confused about is the mapping... after reading the documentation on Baxter and on robot transformations, I don't really understand how to map human poses to Baxter.
I know I need to use IK services, which essentially calculate the co-ordinates required to achieve a pose (given the desired location of the end effector). But it isn't as simple as just plugging the PoseStamped data from the node publishing controller data into the ik_service, right?
Human and robot anatomy are quite different, so I'm not sure whether I'm missing a vital step here.
Looking at other people's example code for the same task, I see that some have created a 'base'/'human' pose which hard-codes co-ordinates for the limbs to mimic a human. Is this essentially what I need?
Sorry if my question is quite broad but I've been having trouble finding an explanation that I understand... Any insight is very much appreciated!
You might find my former student's work on motion mapping using a Kinect sensor with a PR2 informative. It shows two methods:
Direct joint angle mapping (e.g. if the human has their arm at a right angle, then the robot should also have its arm at a right angle).
An IK method that controls the robot's end effector based on the human's hand position.
"I know I need to use IK services, which essentially calculate the co-ordinates required to achieve a pose (given the desired location of the end effector). But it isn't as simple as just plugging the PoseStamped data from the node publishing controller data into the ik_service, right?"
Yes, indeed, this is a fairly involved process! In both cases, we took advantage of the Kinect's API to access the human's joint angle values and the position of the hand. You can read about how Microsoft Research implemented the human skeleton-tracking algorithm here:
https://www.microsoft.com/en-us/research/publication/real-time-human-pose-recognition-in-parts-from-a-single-depth-image/?from=http%3A%2F%2Fresearch.microsoft.com%2Fapps%2Fpubs%2F%3Fid%3D145347
I am not familiar with the Vive device. You should check whether it offers a similar API for accessing skeleton-tracking information, since reverse engineering Microsoft's algorithm will be challenging.
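To make the second (IK) method concrete for the Vive case, here is a rough rospy sketch that forwards a controller pose to Baxter's IK service and streams the resulting joint angles to the arm. The /vive/right_controller/pose topic name is an assumption, and the pose is assumed to already be transformed (e.g. via tf) into Baxter's base frame and offset/scaled into the arm's workspace; that mapping step is exactly the part you still have to design.

```python
#!/usr/bin/env python
# Rough sketch: map an HTC Vive controller pose onto Baxter's right arm via the
# IK service. The /vive/right_controller/pose topic name is an assumption; the
# pose must already be expressed in Baxter's base frame and will usually need
# an offset/scaling to land inside the arm's workspace.
import rospy
from geometry_msgs.msg import PoseStamped
import baxter_interface
from baxter_core_msgs.srv import SolvePositionIK, SolvePositionIKRequest

class ViveToBaxter(object):
    def __init__(self, limb='right'):
        ns = 'ExternalTools/' + limb + '/PositionKinematicsNode/IKService'
        rospy.wait_for_service(ns)
        self._ik = rospy.ServiceProxy(ns, SolvePositionIK)
        self._arm = baxter_interface.Limb(limb)
        rospy.Subscriber('/vive/right_controller/pose', PoseStamped, self.callback)

    def callback(self, pose_msg):
        req = SolvePositionIKRequest()
        req.pose_stamp.append(pose_msg)            # desired end-effector pose
        resp = self._ik(req)
        if resp.isValid[0]:
            joints = dict(zip(resp.joints[0].name, resp.joints[0].position))
            self._arm.set_joint_positions(joints)  # stream joint targets
        else:
            rospy.logwarn('IK found no valid solution for this pose')

if __name__ == '__main__':
    rospy.init_node('vive_to_baxter_ik')
    ViveToBaxter('right')
    rospy.spin()
```

In practice the interesting work happens before the service call: turning a pose measured relative to the lighthouses into a sensible target in Baxter's base frame, and deciding how (or whether) to scale human motion to the robot's reach.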

Displaying Google Tango scans with Hololens

I am currently working with Google Tango and the Microsoft Hololens. My idea is to scan a room or an object using Google Tango and then convert it and display it as a hologram with the Hololens.
For that, I need to get the ADF file onto my computer.
Does someone know a way to import ADF files onto a computer?
Do you know if it is possible to convert ADF files into usable 3D files?
An ADF is not a 3D scan of the room; it's a collection of feature descriptors from the computer-vision algorithms with associated positional data, and the format is not documented.
You will want to take the point cloud from the depth sensor, convert it to a mesh (there are existing apps to do this), and import the mesh into a rendering engine on the Hololens.
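As one possible way to do the point-cloud-to-mesh step offline (my suggestion, not something the Tango or Hololens SDKs provide), the Open3D Python library can run Poisson reconstruction on an exported cloud and write a mesh that Unity on the Hololens side can import. The file names below are placeholders.

```python
# Sketch: convert an exported Tango point cloud into a mesh with Open3D
# (pip install open3d). File names are placeholders.
import open3d as o3d

# Load the exported point cloud (e.g. a PLY/PCD dumped from the depth sensor)
pcd = o3d.io.read_point_cloud("tango_room.ply")

# Poisson reconstruction needs normals
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Build a triangle mesh; higher depth = more detail but more memory
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# Export as OBJ, which Unity (for the Hololens) can import directly
o3d.io.write_triangle_mesh("tango_room.obj", mesh)
```

Note that the ADF itself is never involved in this pipeline; everything comes from the exported depth data.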

How to save a laser-scan 3D point cloud in the ADF format (Google Tango Area Description File)

The 3D point cloud is generated by my laser scanner; I want to save it in the ADF format so Google Tango can use it.
The short answer is... you probably can't.
There is no public information on the ADF format, but in any case it uses more than the 3D points from the depth camera. If you watch the Google I/O videos, they show how Tango uses the wide-angle camera to obtain image features and recognize the environment. I guess using only 3D data would be too expensive and could not exploit information from distant points.
