Converting pointcloud data from an mmWave sensor to LaserScan - ROS

I am using a TI mmWave 1642 EVM sensor to generate pointcloud data, and an Intel NUC to process it.
I am facing the problem of converting the pointcloud data from the mmWave sensor to a laser scan.
By launching rviz_1642_2d.launch, I am able to see the pointcloud data in RViz.
How can I convert the pointcloud data generated by the mmWave sensor to a laser scan?

First of all, this conversion is not straightforward, since a pointcloud describes an unordered set of 3D points in the world. A laser scan, on the other hand, is a well-parametrized, ordered 2D description of equiangular distance measurements.
Converting a pointcloud into a laser scan therefore causes a massive loss of information.
However, there are packages like pointcloud_to_laserscan that do the conversion for you and, furthermore, let you define how the conversion should be applied; a rough sketch of the underlying idea is shown below.
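To illustrate what such a conversion does under the hood, here is a minimal rospy sketch that flattens a PointCloud2 into a LaserScan by keeping the closest return per angular bin. The topic names, height band and angular limits are assumptions; for real use, prefer the pointcloud_to_laserscan node, which does this properly and configurably.

#!/usr/bin/env python
# Minimal sketch of the binning idea behind a pointcloud-to-laserscan conversion.
# Topic names, frame, height band and angular limits are assumptions.
import math
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import LaserScan, PointCloud2

ANGLE_MIN, ANGLE_MAX = -math.pi / 2, math.pi / 2
ANGLE_INC = math.radians(1.0)                     # one ray per degree
N_BINS = int((ANGLE_MAX - ANGLE_MIN) / ANGLE_INC)

def cloud_cb(cloud):
    scan = LaserScan()
    scan.header = cloud.header
    scan.angle_min, scan.angle_max = ANGLE_MIN, ANGLE_MAX
    scan.angle_increment = ANGLE_INC
    scan.range_min, scan.range_max = 0.1, 20.0
    scan.ranges = [float('inf')] * N_BINS
    # Flatten the 3D cloud: keep points inside a height band, project them onto
    # the x/y plane and keep only the closest return per angular bin.
    for x, y, z in pc2.read_points(cloud, field_names=("x", "y", "z"), skip_nans=True):
        if not (-0.2 <= z <= 0.5):
            continue
        angle = math.atan2(y, x)
        if angle < ANGLE_MIN or angle >= ANGLE_MAX:
            continue
        r = math.hypot(x, y)
        i = int((angle - ANGLE_MIN) / ANGLE_INC)
        if scan.range_min <= r <= scan.range_max and r < scan.ranges[i]:
            scan.ranges[i] = r
    scan_pub.publish(scan)

rospy.init_node("cloud_to_scan_sketch")
scan_pub = rospy.Publisher("/scan", LaserScan, queue_size=1)
rospy.Subscriber("/ti_mmwave/radar_scan_pcl", PointCloud2, cloud_cb)  # assumed topic
rospy.spin()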

Related

Eye-to-Hand Calibration with OpenCV

I am new to eye-to-hand calibration. I have read the OpenCV documentation; it says that we need to use cv2.calibrateHandEye(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam).
Can somebody help me by clearly explaining what input values we need to provide, where these input values come from, and how they have to be formatted as matrices? Particularly (R_target2cam, t_target2cam). I am using the UFactory Arm robot and an Intel RealSense camera, so I need to calibrate both. Kindly guide me.
These are my robot position coordinates.
So I think I have Rx, Ry, Rz for R_gripper2base and X, Y, Z for t_gripper2base. What are the values for R_target2cam and t_target2cam, and where can I get them from?
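In OpenCV's formulation, R_target2cam / t_target2cam are the poses of the calibration target (e.g. a chessboard) as seen by the camera, one per robot pose, typically obtained with cv2.solvePnP. Below is a hedged sketch; the board geometry, camera intrinsics, image list and gripper-to-base poses are assumptions/placeholders you would fill from your own setup.

# Sketch: obtain R_target2cam / t_target2cam from chessboard images taken at
# several robot poses, then call calibrateHandEye.  Board size, intrinsics (K,
# dist), the image list and the robot poses are all assumptions/placeholders.
import cv2
import numpy as np

pattern = (9, 6)                  # inner chessboard corners (assumed)
square = 0.025                    # square size in metres (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])   # assumed intrinsics
dist = np.zeros(5)                                            # assumed no distortion

images = []                               # one BGR image per robot pose (to be filled)
R_gripper2base, t_gripper2base = [], []   # from the robot controller: convert each
                                          # pose (X, Y, Z, Rx, Ry, Rz) to a 3x3 R and 3x1 t
R_target2cam, t_target2cam = [], []
for img in images:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue
    ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
    R, _ = cv2.Rodrigues(rvec)            # rotation vector -> 3x3 rotation matrix
    R_target2cam.append(R)
    t_target2cam.append(tvec)

if len(R_target2cam) >= 3:                # needs several distinct robot poses
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base, R_target2cam, t_target2cam)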

FMCW radar: understanding the Doppler FFT

I am using an FMCW radar to find the distance and speed of moving objects with an STM32L476 microcontroller. I transmit the modulation signal as a sawtooth waveform and read the received signal in digital form using the available ADC function. Then I copy this received ADC data into the fft_in array, converting it to float32_t (fft_in array size = 512). After copying, I apply an FFT on this array and process it to find the range of the object. Up to here everything works fine.
Now, in order to find the velocity of the object, I first copy these fft_in arrays as rows of a matrix over 64 chirps (matrix size [64][512]). Then I take the peak range-bin column and apply an FFT to this column array. While processing this column array with the FFT, its length reduces to half (32 elements). Finding the peak-value bin and multiplying by the frequency resolution gives the phase difference ω, from which the velocity can be calculated as v = λω/(4πTc).
While running this algorithm, I find that when the object is stationary, I get the peak value at the 22nd element (out of 32 elements). What does this imply?
My ADC sampling frequency is 24502 Hz, so the per-bin value for range estimation is 47.8566 Hz (24502/512).
I have 64 chirps and Tc is 0.006325 s, so 1/0.006325 gives 158.10 Hz. What would the per-bin velocity resolution be, is it 2.47 Hz (158.10/64)? I am a bit confused about this concept. How does the second FFT work for finding the velocity in FMCW radar?
Infineon has excellent resources on this topic; see this FAQ for the basics: https://www.infineon.com/dgdl/Infineon-Radar%20FAQ-PI-v02_00-EN.pdf?fileId=5546d46266f85d6301671c76d2a00614
If you want to know more details, check out the P2G Software User Manual:
https://www.infineon.com/dgdl/Infineon-P2G_Software_User_Manual-UserManual-v01_01-EN.pdf?fileId=5546d4627762291e017769040a233324 (Chapter 4)
The software with all of the algorithms (including FMCW) is even available. How to get it with the "Infineon Toolbox" is described here: https://www.mouser.com/pdfdocs/Infineon_Position2Go_QS.pdf
Some hints from me:
I suggest applying a window function before the FFT (https://en.wikipedia.org/wiki/Window_function) and removing the mean; a small sketch of the whole processing chain follows below.
Read about frequency mixers: https://en.wikipedia.org/wiki/Frequency_mixer
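Here is a minimal numpy sketch of the two-dimensional (range then Doppler) processing, using the frame layout and sampling parameters from the question; the ADC matrix is a random placeholder and the window choice is an assumption.

# Sketch of range-Doppler processing for one frame of 64 chirps x 512 samples.
# adc holds the raw real-valued ADC data of one frame (placeholder data here).
import numpy as np

fs = 24502.0          # ADC sampling frequency [Hz]  (from the question)
n_samples = 512       # samples per chirp
n_chirps = 64         # chirps per frame
Tc = 0.006325         # chirp repetition time [s]

adc = np.random.randn(n_chirps, n_samples)        # placeholder for real data

# 1st FFT (fast time, per chirp): remove the mean and apply a window first.
win = np.hanning(n_samples)
range_fft = np.fft.rfft((adc - adc.mean(axis=1, keepdims=True)) * win, axis=1)

# Strongest range bin; its beat frequency is bin_index * fs / n_samples.
range_bin = np.argmax(np.abs(range_fft).sum(axis=0))

# 2nd FFT (slow time): FFT across the 64 chirps of that single range bin.
doppler_fft = np.fft.fftshift(
    np.fft.fft(range_fft[:, range_bin] * np.hanning(n_chirps)))

# Doppler resolution is 1 / (n_chirps * Tc) ~ 2.47 Hz per bin here; after
# fftshift a stationary target peaks in the centre bin (0 Hz).
doppler_bin = np.argmax(np.abs(doppler_fft)) - n_chirps // 2
doppler_freq = doppler_bin / (n_chirps * Tc)       # [Hz]
# velocity = wavelength * doppler_freq / 2, equivalent to v = lambda*w/(4*pi*Tc)
print("range bin:", range_bin, "Doppler bin:", doppler_bin, "f_D [Hz]:", doppler_freq)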

ROS package for sensor fusion (IMU and pressure) data?

I'm looking for a ROS package (KF, UKF or EKF) that can fuse IMU and pressure sensor data. I would like to obtain the estimated velocities (linear and angular, 6 DOF) from the IMU and pressure sensor data. The IMU is 9 DOF (orientation, angular_velocity and linear_acceleration). The barometer (pressure sensor) can be used on the underwater robot by assuming the sea (water) level is constant, so the pressure should stay the same during linear movement of the underwater robot (vehicle). Is it possible to use such a package to fuse the IMU and pressure data to obtain an estimated velocity (linear and angular)?
If there is no existing ROS package that serves as a velocity observer and fuses IMU and pressure data, is there any other code or help that I can use and implement in ROS?
Thanks
You can use pose_ekf, as it takes IMU and 3D/2D odometry; you will just need to convert the pressure into an odometry message yourself. Otherwise, the hector_localization package supports pressure as an input type by default.
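For the pressure-to-odometry conversion mentioned above, here is a minimal rospy sketch; the topic names, surface pressure, water density and covariance are assumptions.

#!/usr/bin/env python
# Sketch: convert a FluidPressure reading into a z-only Odometry message that an
# EKF (e.g. robot_pose_ekf or hector_localization) can consume.
# Topic names, surface pressure, water density and covariance are assumptions.
import rospy
from sensor_msgs.msg import FluidPressure
from nav_msgs.msg import Odometry

P0 = 101325.0      # surface pressure [Pa]       (assumed)
RHO = 1025.0       # sea-water density [kg/m^3]  (assumed)
G = 9.81           # gravity [m/s^2]

def pressure_cb(msg):
    depth = (msg.fluid_pressure - P0) / (RHO * G)   # hydrostatic depth [m]
    odom = Odometry()
    odom.header.stamp = msg.header.stamp
    odom.header.frame_id = "odom"
    odom.child_frame_id = "base_link"
    odom.pose.pose.position.z = -depth              # z goes negative as the robot dives
    odom.pose.covariance[14] = 0.05                 # z variance (assumed)
    odom_pub.publish(odom)

rospy.init_node("pressure_to_odom_sketch")
odom_pub = rospy.Publisher("/pressure_odom", Odometry, queue_size=10)
rospy.Subscriber("/pressure", FluidPressure, pressure_cb)
rospy.spin()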

Input representation in FFT for a given list of amplitudes and sampling rate

How do I represent a sound wave (sine wave, 1000 Hz, 3 s, -3 dBFS, 44.1 kHz) as input to an FFT program? The input for the program is a list of amplitudes and a sampling rate.
I mean: how do I turn a sound file (e.g. XYZ.wav) into input for the FFT, where one input argument takes a .dat file consisting of amplitudes, another takes the sampling rate, and so on as necessary?
Typically when you execute an FFT call you supply a one-dimensional array which represents a curve in the time domain; often this is your audio curve, but an FFT will transform any time-series curve. When you start from an audio file, say a WAV file, you must transform the binary data into this floating-point 1D array. A WAV file begins with a 44-byte header which details essential attributes like sample rate, bit depth and endianness; the rest of the file is the payload. Depending on the bit depth you then need to parse a set of bytes, typically 16 bits (two bytes) per sample, into an integer by doing some bit shifting. To do that you need to be aware of the notion of endianness (big or little endian), as well as handle the interleaving of a multi-channel signal like stereo. Once you have generated the floating-point array representation, just feed it into your FFT call. For starters, ignore the WAV file and simply synthesize your own sine curve and feed it into the FFT call, just to confirm that the known frequency you put in comes out represented in the frequency domain.
The response back from an FFT call (or DFT) is a 1D array of complex numbers. There is a simple formula to calculate the magnitude and phase of each frequency in this FFT result set. Be aware of what the Nyquist limit is and how to fold the frequency-domain array on top of itself to double the magnitude while using only half of its elements. Element 0 of this frequency-domain array is your DC offset, and each subsequent element is called a frequency bin, separated from its neighbours by a constant frequency increment that is also given by a simple formula. Pipe back if you are interested in what these formulas are.
Now you can appreciate the people who spend their entire careers pushing the frontiers of the algorithms behind the curtain of these API calls. Slapping together 30 lines of API calls to perform all of the above is probably possible, but it is far more instructive to write the code yourself by hand, as it will open up new horizons and enable you to tackle more subtle questions.
A super interesting aspect of transforming a curve in the time domain into its frequency-domain counterpart with an FFT call is that you retain all of the information of your source signal. To prove this point, I highly suggest you take the next step and perform the symmetrical operation, transforming the output of your FFT call back into the time domain:
audio curve in time domain --> fft --> freq domain representation --> inverse fft --> back to original audio curve in time domain
This cycle of transformations is powerful, as it is a nice way to confirm that your audio-curve-to-frequency-domain processing is working.
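Here is a minimal numpy sketch of that synthesize -> FFT -> inverse-FFT round trip, using the signal parameters from the question; the scaling convention is an assumption for illustration.

# Sketch of the synthesize -> FFT -> inverse-FFT round trip described above.
import numpy as np

fs = 44100                       # sampling rate [Hz]
dur = 3.0                        # duration [s]
f0 = 1000.0                      # sine frequency [Hz]
amp = 10 ** (-3 / 20.0)          # -3 dBFS as a linear amplitude

t = np.arange(int(fs * dur)) / fs
x = amp * np.sin(2 * np.pi * f0 * t)          # time-domain "audio curve"

# Forward FFT: complex spectrum, one bin every fs/N Hz, bin 0 is the DC offset.
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
magnitude = np.abs(X) * 2 / len(x)            # scale so the sine's peak ~= amp
peak = np.argmax(magnitude)
print("bin spacing %.4f Hz, peak at %.1f Hz" % (freqs[1], freqs[peak]))

# Inverse FFT: recover the original curve (to numerical precision), confirming
# that no information was lost in the transform.
x_back = np.fft.irfft(X, n=len(x))
print("round-trip max error:", np.max(np.abs(x - x_back)))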

How to decode from a scikit-learn embedding?

If I have a data matrix X for which I want to learn a manifold embedding:
from sklearn.manifold import MDS
mds = MDS()
embedding = mds.fit_transform(X)
I can get back a 2D embedding/encoding of the original data X in the variable embedding.
Is there a way to "decode"/de-embed a given 2D point back to the original data dimension?
99% of the embeddings used in ML are not injective, so there is no such thing as an inverse transformation (it is not even about being hard; it literally cannot exist, since the mapping sends huge chunks of the space to a single point). In particular, MDS is not injective, so there is no way to go back.
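If an approximate mapping back is acceptable in practice, one common workaround (not part of MDS itself, which has no inverse_transform) is to fit a regressor from the embedding back to the original space. A hedged sketch with placeholder data:

# Sketch of an approximate "decoder": learn a mapping embedding -> X.
# This is a workaround, not a true inverse; the data here is a placeholder.
import numpy as np
from sklearn.manifold import MDS
from sklearn.neighbors import KNeighborsRegressor

X = np.random.rand(200, 10)                 # placeholder for the real data matrix
embedding = MDS(n_components=2).fit_transform(X)

decoder = KNeighborsRegressor(n_neighbors=5).fit(embedding, X)
point_2d = embedding[0].reshape(1, -1)      # any 2D point near the embedded data
X_approx = decoder.predict(point_2d)        # approximate pre-image in the original space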
