Using hector_slam mapping for the ROS navigation stack - localization

Following is the approach that I am adopting:
I have built the map using the hector_slam package while controlling my robot using an RF transmitter.
For the move_base node I am using amcl localization parameters.
To test my robot's position on the map, I drive the robot manually; I then receive the warnings below, and the robot's estimated pose jumps randomly all around the map.
[WARN]: The origin of the sensor is out of map bounds. so the costmap cannot raytrace for it
[WARN]: Map update loop missed its desired rate of 10.0000Hz...the loop actually took 0.1207 seconds.
Autonomous navigation will only work once this localization problem is solved, and my naive guess is that the problem lies in building the map with hector_slam and later using amcl for localization.
Questions:
Is this approach possible?
Can I use hector_slam for localization instead of amcl?
How do I make the robot localize itself in the pre-built map (built with hector_slam) while I control it manually?
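For reference, a minimal sketch of how a hector_slam-built map is typically served to amcl and move_base is shown below; the package, map file, and topic names are placeholders, and amcl additionally expects an odom -> base_link transform from some odometry source.

```xml
<launch>
  <!-- Serve the map previously saved from hector_slam, e.g. with:
       rosrun map_server map_saver -f mymap
       (package and file names are placeholders). -->
  <node pkg="map_server" type="map_server" name="map_server"
        args="$(find my_robot_nav)/maps/mymap.yaml"/>

  <!-- amcl localizes against that map; it needs the laser scan plus an
       odom -> base_link transform published by an odometry source. -->
  <node pkg="amcl" type="amcl" name="amcl">
    <param name="odom_frame_id" value="odom"/>
    <param name="base_frame_id" value="base_link"/>
    <param name="global_frame_id" value="map"/>
    <remap from="scan" to="/scan"/>
  </node>

  <!-- move_base then plans on the map using the pose estimated by amcl. -->
  <node pkg="move_base" type="move_base" name="move_base" output="screen">
    <rosparam file="$(find my_robot_nav)/config/move_base_params.yaml" command="load"/>
  </node>
</launch>
```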

Related

Simulate Multibody System with Partial Trajectory

In Python Drake, I have a MultibodyPlant system that I have defined from a URDF. I am working on a trajectory optimization that involves hand-calculated contact dynamics, and I want to test out the trajectory that I calculated using the built-in simulator in Drake. I'm currently visualizing the entire calculated trajectory using MultibodyPositionToGeometryPose with TrajectorySource on a complete numpy array of all bodies in my system.
However, I want to test out my trajectory with the system dynamics handled instead by Drake. Is there a way to pass in a trajectory for some of the positions (i.e. joints connected to some of the links), and let the Drake simulation calculate the rest of them? I'd like to do this before attempting to implement a stabilized LQR controller to evaluate just the accuracy of my hand-calculated contact dynamics.
That feature is coming soon in the newest contact solver in Drake (https://github.com/RobotLocomotion/drake/issues/16738), but as of today, you could do this using a PidController with the state_projection matrix to select the subset of joints you would want to command. Those joints will need to have actuators on them.
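As a rough pydrake sketch of that suggestion (the URDF path, joint names, gains, and the placeholder desired trajectory below are all assumptions, and keyword and index conventions may differ slightly between Drake versions):

```python
# Hypothetical sketch: drive a subset of joints toward a precomputed trajectory
# with a PidController while Drake simulates the remaining dynamics.
import numpy as np
from pydrake.multibody.parsing import Parser
from pydrake.multibody.plant import AddMultibodyPlantSceneGraph
from pydrake.systems.analysis import Simulator
from pydrake.systems.controllers import PidController
from pydrake.systems.framework import DiagramBuilder
from pydrake.systems.primitives import TrajectorySource
from pydrake.trajectories import PiecewisePolynomial

builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=1e-3)
Parser(plant).AddModels("my_robot.urdf")  # placeholder (older Drake: AddModelFromFile)
plant.Finalize()

# Joints to command (placeholder names); they must have actuators in the URDF.
commanded = [plant.GetJointByName(name) for name in ["joint_1", "joint_2"]]
nq, nv, nc = plant.num_positions(), plant.num_velocities(), len(commanded)

# state_projection maps the full plant state [q; v] onto [q_sel; v_sel] for the
# commanded joints. (In some Drake versions velocity_start() indexes the full
# state rather than v alone; double-check against your version.)
P = np.zeros((2 * nc, nq + nv))
for i, joint in enumerate(commanded):
    P[i, joint.position_start()] = 1.0
    P[nc + i, nq + joint.velocity_start()] = 1.0

pid = builder.AddSystem(PidController(
    state_projection=P,
    kp=np.full(nc, 100.0), ki=np.zeros(nc), kd=np.full(nc, 10.0)))  # placeholder gains

# Stand-in for the hand-computed [q_sel; v_sel] trajectory of the commanded joints.
q_v_traj = PiecewisePolynomial.FirstOrderHold([0.0, 1.0], np.zeros((2 * nc, 2)))
desired = builder.AddSystem(TrajectorySource(q_v_traj))

builder.Connect(plant.get_state_output_port(), pid.get_input_port_estimated_state())
builder.Connect(desired.get_output_port(), pid.get_input_port_desired_state())
# Assumes these joints provide the plant's only actuators, declared in the same
# order as `commanded`.
builder.Connect(pid.get_output_port_control(), plant.get_actuation_input_port())

simulator = Simulator(builder.Build())
simulator.AdvanceTo(q_v_traj.end_time())
```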

How to minimize the error when integrating 3D angular velocity data obtained by the IMU to get linear velocity?

I have an IMU sensor that gives me raw data such as orientation, angular velocity, and linear acceleration. I'm using ROS and running a Gazebo UUV simulation. Furthermore, I want to get linear velocity from the raw IMU data.
The naive integration method is as follows: the simplest method for IMU state estimation is naive integration of the measured data. We estimate the attitude by integrating the 3D angular velocity data obtained by the IMU. Assuming that the time step Δt is small, the attitude at each time step can be computed incrementally.
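For concreteness, a minimal sketch of that naive integration (assuming the accelerometer reports specific force in the body frame; gyro bias and noise are ignored, which is exactly why the estimate drifts):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def integrate_imu(orientation0, gyro, accel, dt):
    """orientation0: initial body-to-world Rotation; gyro: (N, 3) rad/s; accel: (N, 3) m/s^2."""
    q = orientation0
    v = np.zeros(3)                      # linear velocity in the world frame
    g = np.array([0.0, 0.0, -9.81])      # gravity in the world frame
    for w, a in zip(gyro, accel):
        q = q * R.from_rotvec(w * dt)    # attitude update by a small rotation increment
        v += (q.apply(a) + g) * dt       # rotate specific force to world, remove gravity
    return q, v
```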
If I integrate over time, the error accumulates and the estimate becomes inaccurate, for example when the robot makes turns. So I'm looking for methods (ROS packages or approaches outside the ROS framework) or code that can correct that error.
Any help?
I would first recommend that you try fitting your input sensor data into an EKF or UKF node from the robot_localization package. This package is the most used and most optimized pose estimation package in the ROS ecosystem.
It can handle 3D sensor input, but you have to configure the parameters yourself (there are no real defaults; it is all configuration). Besides the configuration docs above, the GitHub repository has good examples of YAML parameter configurations (Ex.) (you'll want a separate file from the launch file) and example launch files (Ex.).
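As a starting point, a hypothetical minimal YAML configuration for robot_localization's ekf_localization_node might look like the following; the topic names, frames, and every value are placeholders that need tuning for the UUV.

```yaml
frequency: 30
sensor_timeout: 0.1
two_d_mode: false            # the UUV moves in 3D

odom_frame: odom
base_link_frame: base_link
world_frame: odom

imu0: /imu/data
# Rows: x, y, z | roll, pitch, yaw | vx, vy, vz | vroll, vpitch, vyaw | ax, ay, az
imu0_config: [false, false, false,
              true,  true,  true,
              false, false, false,
              true,  true,  true,
              true,  true,  true]
imu0_differential: false
imu0_remove_gravitational_acceleration: true

# Fusing only an IMU still drifts in position/velocity; add at least one
# velocity or position source (e.g. a DVL or pressure sensor) as odom0/pose0.
```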

ROS Human-Robot mapping (Baxter)

I'm having some difficulty understanding the concept of teleoperation in ROS, so I'm hoping someone can clear some things up.
I am trying to control a Baxter robot (in simulation) using an HTC Vive device. I have a node (publisher) which successfully extracts PoseStamped data (containing pose data referenced to the lighthouse base stations) from the controllers and publishes it on separate topics for the right and left controllers.
So now I wish to create the subscribers which receive the pose data from the controllers and convert it to a pose for the robot. What I'm confused about is the mapping... after reading documentation on Baxter and robot transformations, I still don't really understand how to map human poses to Baxter.
I know I need to use IK services which essentially calculate the co-ordinates required to achieve a pose (given the desired location of the end effector). But it isn't as simple as just plugging in the PoseStamped data from the node publishing controller data to the ik_service right?
Human and robot anatomy are quite different, so I'm not sure if I'm missing a vital step here.
Looking at other people's example code for the same task, I see that some have created a 'base'/'human' pose which hard-codes coordinates for the limbs to mimic a human. Is this essentially what I need?
Sorry if my question is quite broad but I've been having trouble finding an explanation that I understand... Any insight is very much appreciated!
You might find my former student's work on motion mapping using a Kinect sensor with a PR2 informative. It shows two methods:
Direct joint angle mapping (e.g., if the human's arm is bent at a right angle, then the robot's arm should also be bent at a right angle).
An IK method that controls the robot's end effector based on the human's hand position.
I know I need to use IK services which essentially calculate the co-ordinates required to achieve a pose (given the desired location of the end effector). But it isn't as simple as just plugging in the PoseStamped data from the node publishing controller data to the ik_service right?
Yes, indeed, this is a fairly involved process! In both cases, we took advantage of the Kinect's API to access the human's joint angle values and the position of the hand. You can read about how Microsoft Research implemented the human skeleton tracking algorithm here:
https://www.microsoft.com/en-us/research/publication/real-time-human-pose-recognition-in-parts-from-a-single-depth-image/?from=http%3A%2F%2Fresearch.microsoft.com%2Fapps%2Fpubs%2F%3Fid%3D145347
I am not familiar with the Vive device. You should see whether it offers a similar API for accessing skeleton-tracking information, since reverse engineering Microsoft's algorithm will be challenging.
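As an illustration of the IK route (not taken from the linked work), here is a rough rospy sketch that feeds a controller PoseStamped into Baxter's IK service; the topic name and limb side are placeholders, and the Vive pose must first be transformed into Baxter's base frame.

```python
import rospy
import baxter_interface
from geometry_msgs.msg import PoseStamped
from baxter_core_msgs.srv import SolvePositionIK, SolvePositionIKRequest

LIMB = "right"  # placeholder limb side

rospy.init_node("vive_to_baxter_ik")
ik_service = rospy.ServiceProxy(
    "ExternalTools/" + LIMB + "/PositionKinematicsNode/IKService", SolvePositionIK)
arm = baxter_interface.Limb(LIMB)

def on_controller_pose(pose_msg):
    # The raw Vive pose is expressed in the lighthouse frame; transform/offset it
    # into Baxter's "base" frame before sending it to the IK service (not shown).
    request = SolvePositionIKRequest()
    request.pose_stamp.append(pose_msg)
    response = ik_service(request)
    if response.isValid[0]:
        joint_angles = dict(zip(response.joints[0].name, response.joints[0].position))
        arm.set_joint_positions(joint_angles)
    else:
        rospy.logwarn("No IK solution for this end-effector pose")

rospy.Subscriber("/vive/controller_right/pose", PoseStamped, on_controller_pose)
rospy.spin()
```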

How to use the Gesture Recognition Toolkit (GRT) interface to train an HMM model and test it on datasets that have already been created?

I am currently working on a project: air writing using inertial sensors. Based on the 6-DOF (degrees of freedom) accelerometer and gyroscope values received from a sensor placed on the finger, the system should identify the gesture made with the finger, basically the characters 'a' to 'z'. I came across the Gesture Recognition Toolkit (GRT), which has a new interface in one of its recent updates.
So, can someone explain how to train (using an HMM) and test the output using the toolkit's new interface? I'm new to machine learning, so I don't know exactly where to begin. This toolkit seems to be a shortcut toward what I intend to achieve as output, namely identifying characters written in the air. I have created a database of the inertial sensor values for the characters 'a', 'b', 'c', and 'd'. Now I want to know how to proceed with the machine learning part using this toolkit. Can anyone help me?

Using Augmented Reality libraries for Academic Project

I'm planning on doing my final year project of my degree on Augmented Reality. It will use markers, and there will also be interaction between virtual objects (a sort of simulation).
Do you recommend using libraries like ARToolKit, NyARToolkit, or osgART for such a project, since they come with all the functions for tracking, detection, calibration, etc.? Will there be much work left from the programmer's point of view?
What do you think about using OpenCV and doing the marker detection, recognition, calibration, and other steps from scratch? Would that be too hard to handle?
I don't know how familiar you are with image or video processing, but writing a tracker from scratch will be very time-consuming if you want it to return reliable results. The effort also depends on which kind of markers you plan to use.
ARToolKit, for example, compares the marker content detected in the video stream to images you defined as markers beforehand. It tries to match the images and returns a probability that a certain part of the video stream is a predefined marker. Depending on the threshold you use and the lighting conditions, markers are not always recognized correctly.
Then there are other markers, such as Data Matrix, QR codes, and frame markers (used by QCAR), that encode an ID optically, so no image matching is required and all the necessary data can be retrieved from the video stream. There are also more complex approaches like natural feature tracking, where you can use predefined images, provided they offer enough contrast and points of interest to be recognized later by the tracker.
So if you are more interested in the actual application or interaction than in understanding how trackers work, you should base your work on an existing library.
I suggest you use OpenCV: you will find high-quality algorithms, and it is fast. New methods are continuously being developed, so it will soon be possible to run them in real time on mobile devices.
You can start with this tutorial here.
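As a hedged illustration of the OpenCV route, here is a short sketch using the aruco module (ArUco markers are just one option among the marker types mentioned above; this uses the OpenCV >= 4.7 class-based API, while older versions expose cv2.aruco.detectMarkers as a free function):

```python
import cv2

# Dictionary of predefined ArUco markers and a detector with default parameters.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)  # placeholder camera index
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, rejected = detector.detectMarkers(frame)
    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
        # With camera calibration (intrinsics + distortion) you can go on to
        # estimate each marker's 6-DOF pose, e.g. via cv2.solvePnP.
    cv2.imshow("markers", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```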
Mastering OpenCV with Practical Computer Vision Projects
I did the exact same thing and found Chapter 2 of this book immensely helpful. They provide source code for the marker tracking project and I've written a framemarker generator tool. There is still quite a lot to figure out in terms of OpenGL, camera calibration, projection matrices, markers and extending it, but it is a great foundation for the marker tracking portion.
