ROS package for sensor fusion (IMU and pressure) data?

I'm looking for a ROS package (KF, EKF, or UKF) that can fuse IMU and pressure sensor data. I would like a 6-DOF estimated velocity (linear and angular, with its 6x6 covariance) from the IMU and pressure sensor data. The IMU is 9-DOF (orientation, angular_velocity, and linear_acceleration). The barometer (pressure sensor) can be used on an underwater robot: assuming the sea (water) level is constant, the pressure should stay the same during horizontal movement of the vehicle, so it effectively measures depth. Is it possible to use such a package to fuse the IMU and pressure data to obtain an estimated velocity (linear and angular)?
If no existing ROS package serves as a velocity observer fusing IMU and pressure data, is there any other code or help that I can use and implement in ROS?
Thanks

You can use pose_ekf, as it takes IMU and 3D/2D odometry; you will just need to convert the pressure into an odometry message yourself. Otherwise, the hector_localization package supports pressure as an input type by default.
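For the pose_ekf route, that conversion could be a small relay node. Here is a minimal sketch, assuming fresh water, ignoring the atmospheric pressure offset, and with illustrative topic and frame names:

import rospy
from sensor_msgs.msg import FluidPressure
from nav_msgs.msg import Odometry

def pressure_callback(msg):
    # P = rho*g*h; assumes fresh water (rho ~ 997 kg/m^3)
    depth = msg.fluid_pressure / (997.0 * 9.8)
    odom = Odometry()
    odom.header.stamp = msg.header.stamp
    odom.header.frame_id = 'odom'  # illustrative frame name
    odom.pose.pose.position.z = -depth  # z expressed as depth below the surface
    # A real setup would also fill pose.covariance for the filter
    odom_pub.publish(odom)

rospy.init_node('pressure_to_odom')
odom_pub = rospy.Publisher('pressure_odom', Odometry, queue_size=10)
rospy.Subscriber('pressure', FluidPressure, pressure_callback)
rospy.spin()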

Related

FMCW radar: understanding the Doppler FFT

I am using an FMCW radar to measure the distance and speed of moving objects with an STM32L476 microcontroller. I transmit the modulation signal as a sawtooth waveform and read the received signal in digital form using the available ADC function. Then I copy this received ADC data into the fft_in array, converting it to float32_t (fft_in array size = 512). After copying, I apply an FFT to this array and process it to find the range of the object. Up to here everything works fine.
Now, to find the velocity of the object, I first copy these fft_in arrays as rows of a matrix over 64 chirps (matrix size [64][512]). Then I take the peak range-bin column and apply an FFT to that column array; in the process its length reduces to half (32 elements). Finding the peak bin and multiplying by the frequency resolution gives the phase difference ω, from which velocity can be calculated as v = λω / (4πTc).
While running this algorithm, I find that when the object is stationary, I get the peak value at the 22nd element (out of 32). What does this imply?
My ADC sampling frequency is 24502 Hz, so the per-bin value for range estimation is 47.8566 Hz (24502/512).
I have 64 chirps and Tc is 0.006325 s, so 1/0.006325 gives 158.10 Hz. What would the per-bin velocity resolution be, is it 2.47 Hz (158.10/64)? I am a bit confused about this concept. How does the 2nd FFT work for finding velocity in FMCW radar?
Infineon has excellent resources on this topic; see this FAQ for the basics: https://www.infineon.com/dgdl/Infineon-Radar%20FAQ-PI-v02_00-EN.pdf?fileId=5546d46266f85d6301671c76d2a00614
If you want to know more details, check out the P2G Software User Manual:
https://www.infineon.com/dgdl/Infineon-P2G_Software_User_Manual-UserManual-v01_01-EN.pdf?fileId=5546d4627762291e017769040a233324 (Chapter 4)
The software with all the algorithms (including FMCW) is even available; how to get it via the "Infineon Toolbox" is described here: https://www.mouser.com/pdfdocs/Infineon_Position2Go_QS.pdf
Some hints from me:
I suggest applying a window function (https://en.wikipedia.org/wiki/Window_function) and removing the mean before the FFT; see the sketch after these hints.
Read about frequency mixers https://en.wikipedia.org/wiki/Frequency_mixer
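For the first hint, a minimal NumPy sketch (fft_in is the array from the question; peak_bin_column is an assumed name for the 64 complex values taken from the peak range bin across chirps):

import numpy as np

# One chirp of ADC samples (fft_in from the question), as float
chirp = np.asarray(fft_in, dtype=np.float32)

# Remove the mean (DC offset), then apply a Hann window before the range FFT
chirp = chirp - chirp.mean()
range_fft = np.fft.rfft(chirp * np.hanning(chirp.size))

# The Doppler (2nd) FFT is taken across the 64 chirps at the peak range bin.
# Its bin spacing is 1/(N*Tc) = 1/(64 * 0.006325 s) ~ 2.47 Hz, i.e. the
# 158.10 Hz chirp repetition frequency divided by the 64 chirps.
doppler_fft = np.fft.fft(peak_bin_column * np.hanning(peak_bin_column.size))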

How to republish sensor_msgs/FluidPressure as geometry_msgs/PointStamped?

I'm doing an underwater UUV Gazebo simulation and would like to use the hector_pose_estimation package to fuse IMU and fluid pressure sensor input for pose estimation. But when running the node I get this error:
Client [/hector_pose_estimation] wants topic /rexrov2/pressure to have datatype/md5sum [geometry_msgs/PointStamped/c63aecb41bfdfd6b7e1fac37c7cbe7bf], but our version has [sensor_msgs/FluidPressure/804dc5cea1c5306d6a2eb80b9833befe]. Dropping connection.
So my pressure sensor publishes sensor_msgs/FluidPressure, but the package wants pressure_height (geometry_msgs/PointStamped) as input. How can I republish sensor_msgs/FluidPressure as geometry_msgs/PointStamped?
Thanks
To my knowledge there is no node that does this direct translation. That being said, getting depth from pressure is fairly trivial via the equation P = ρgh, where:
P = Pressure in Pascals
ρ = Fluid density
g = Acceleration due to gravity
h = Depth
Roughly speaking, the density of fresh water is 997.04 kg/m^3 and that of salt water is 1023.60 kg/m^3; gravitational acceleration on Earth is 9.8 m/s^2, and your pressure comes off the ROS topic. If you have a sensor_msgs/FluidPressure callback defined as pressure_callback, it would look something like this:
import rospy
from geometry_msgs.msg import PointStamped

def pressure_callback(msg):
    pressure = msg.fluid_pressure  # absolute pressure in Pascals
    depth = pressure / (997.04 * 9.8)  # P = rho*g*h; this assumes fresh water
    output_msg = PointStamped()
    output_msg.header = msg.header
    output_msg.point.z = depth
    some_publisher.publish(output_msg)
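To wire it up, a minimal node could look like this (the input topic comes from the error message above; the output topic name is an assumption, check what hector_pose_estimation actually subscribes to):

import rospy
from sensor_msgs.msg import FluidPressure
from geometry_msgs.msg import PointStamped

rospy.init_node('pressure_to_point')
some_publisher = rospy.Publisher('/rexrov2/pressure_height', PointStamped, queue_size=10)
rospy.Subscriber('/rexrov2/pressure', FluidPressure, pressure_callback)
rospy.spin()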

How to get the robot's pose estimate from a SLAM algorithm?

I would like to evaluate the robot's localization, i.e. compare the pose estimate from the SLAM algorithm (the gmapping package) against the ground truth. I've acquired the ground truth data from the /ground_truth/state topic: pose.position.xyz and orientation.xyzw.
But now I don't know how to get the pose estimate from the gmapping package. Where can I get that data? Is it in the /tf topic? If so, which frame (base_link, base_scan, caster_back_link or imu_link) carries the estimate from gmapping? And how can I echo the data in the terminal the same way as the ground truth data? Thank you! The figure below shows the frame data from the rviz application.
The gmapping package does not directly publish any pose. It publishes the /map topic, which is an occupancy grid, and it also publishes the transform from the map frame to the odom frame; this is essentially a roundabout way of getting a pose. If you want a pose in the map frame, you need another node that takes the current pose, looks up the most recent transform produced by gmapping, and applies it to the pose. This can be done with the tf package, for example:
import rospy
import tf

listener = tf.TransformListener()
(trans, rot) = listener.lookupTransform('map', 'odom', rospy.Time(0))  # Time(0) = latest available
pose_in_map = listener.transformPose('map', your_pose)  # your_pose stamped in the odom frame
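If the lookup raises an extrapolation error because the transform has not arrived yet, you can block until it is available first (the 4-second timeout is an arbitrary choice):

listener.waitForTransform('map', 'odom', rospy.Time(0), rospy.Duration(4.0))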

Converting pointcloud data from an mmWave sensor to laserscan

I am using a TI mmWave 1642 EVM sensor to generate pointcloud data, and an Intel NUC to process it.
I am facing the problem of converting the pointcloud data from the mmWave sensor to a laserscan.
By launching rviz_1642_2d.launch, I am able to see the pointcloud data in rviz.
How do I convert the pointcloud data generated by the mmWave sensor to a laserscan?
First of all, this conversion is not straightforward, since a pointcloud describes an unordered set of 3D points in the world, whereas a laser scan is a well-parametrized, ordered 2D description of equiangular distance measurements.
Therefore, converting a pointcloud into a laserscan causes a massive loss of information.
However, there are packages like pointcloud_to_laserscan that do the conversion for you and, furthermore, let you define how the conversion should be applied.
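For reference, a minimal launch snippet for that package might look like this (the cloud_in topic and the height limits are assumptions for a TI mmWave setup; adjust them to your sensor):

<node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node" name="pointcloud_to_laserscan">
  <remap from="cloud_in" to="/ti_mmwave/radar_scan_pcl"/>
  <!-- Only points within this height band are projected into the scan -->
  <param name="min_height" value="-0.1"/>
  <param name="max_height" value="0.5"/>
  <param name="range_min" value="0.1"/>
  <param name="range_max" value="10.0"/>
</node>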

Object detection and tracking

I have a function to detect an object held by a person, but it is slow:
top_left_x, top_left_y, bottom_right_x, bottom_right_y = self.segment_image(depth_image_raw, cv_image)
I would like to run tracking for a couple of frames in between to speed up the whole process. What is the best way to do this using OpenCV?
I am using a ROS node to acquire the frames from the Kinect sensor.
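One common pattern is to run the expensive detector every N frames and update a lightweight OpenCV tracker in between. A minimal sketch, assuming segment_image from above is a method of the same class and that opencv-contrib is installed for the KCF tracker (in newer OpenCV versions it lives under cv2.legacy):

import cv2

DETECT_EVERY = 10  # re-run the slow detector every N frames (a tuning choice)

class DetectAndTrack(object):
    def __init__(self):
        self.tracker = None
        self.frame_count = 0

    def process_frame(self, cv_image, depth_image_raw):
        if self.tracker is None or self.frame_count % DETECT_EVERY == 0:
            # Slow path: full detection, then (re)initialize the tracker
            x1, y1, x2, y2 = self.segment_image(depth_image_raw, cv_image)
            bbox = (x1, y1, x2 - x1, y2 - y1)
            self.tracker = cv2.TrackerKCF_create()
            self.tracker.init(cv_image, bbox)
        else:
            # Fast path: cheap tracker update between detections
            ok, bbox = self.tracker.update(cv_image)
            if not ok:
                self.tracker = None  # track lost; re-detect on the next frame
        self.frame_count += 1
        return bbox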
