How to republish sensor_msgs/FluidPressure as geometry_msgs/PointStamped? - ros

I'm doing some underwater UUV Gazebo simulation and would like to use the
hector_pose_estimation
package to fuse IMU and fluid pressure sensor input for pose estimation. But when running the node I get this error:
Client [/hector_pose_estimation] wants topic /rexrov2/pressure to have datatype/md5sum [geometry_msgs/PointStamped/c63aecb41bfdfd6b7e1fac37c7cbe7bf], but our version has [sensor_msgs/FluidPressure/804dc5cea1c5306d6a2eb80b9833befe]. Dropping connection.
So my pressure sensor publishes sensor_msgs/FluidPressure, but the package wants pressure_height (geometry_msgs/PointStamped) as input. How can I republish sensor_msgs/FluidPressure as geometry_msgs/PointStamped?
Thanks

To my knowledge there is no node that does this direct translation. That said, getting depth from a pressure reading is fairly trivial via the equation P = ρgh. Here:
P = Pressure in Pascals
ρ = Fluid density
g = Acceleration due to gravity
h = Depth
Roughly speaking, the density of fresh water is 997.04 kg/m^3 and that of salt water is 1023.60 kg/m^3. Acceleration due to gravity on Earth is 9.8 m/s^2, and your pressure is coming off the ROS topic. If you have a sensor_msgs/FluidPressure callback defined as pressure_callback, it would look something like this:
def pressure_callback(msg):
    pressure = msg.fluid_pressure
    depth = pressure / (997.04 * 9.8)  # this assumes fresh water
    output_msg = PointStamped()        # geometry_msgs/PointStamped
    output_msg.header = msg.header     # keep the sensor's timestamp/frame
    output_msg.point.z = depth         # PointStamped stores x/y/z in .point
    some_publisher.publish(output_msg)
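Wrapped into a minimal relay node, it could look like the sketch below. The node name and output topic are placeholders of mine, and you should check which sign convention hector_pose_estimation expects (height vs. depth) and whether your sensor reports absolute or gauge pressure:

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import FluidPressure
from geometry_msgs.msg import PointStamped

def pressure_callback(msg):
    # FluidPressure is in Pascals; if the sensor reports absolute
    # pressure, subtract atmospheric (~101325 Pa) before converting
    depth = msg.fluid_pressure / (997.04 * 9.8)  # fresh water
    out = PointStamped()
    out.header = msg.header
    out.point.z = depth
    pub.publish(out)

if __name__ == '__main__':
    rospy.init_node('pressure_to_height')  # placeholder node name
    pub = rospy.Publisher('/rexrov2/pressure_height', PointStamped,
                          queue_size=10)   # placeholder topic name
    rospy.Subscriber('/rexrov2/pressure', FluidPressure, pressure_callback)
    rospy.spin()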

Related

Algorithm for detecting the lowest level of EMG activity?

Having only data from an EMG sensor, how can I determine whether a person is in the REM phase? In other words, I need to detect the lowest level of activity from the EMG sensor, or at least register the phase change.
In more detail: I'm going to make a REM phase detector using an EMG (electromyography) sensor. There is already a sketch of the Android application on GitHub; if you are interested, I can post a link, although there is still work to be done.
The device should work based on the fact that in different states of the brain (wakefulness, slow sleep, REM sleep), different levels of activity will be recorded from the sensor. In REM sleep, this activity is minimal.
The Bluetooth sensor is attached to the body before going to bed; the Android program is launched, communicates with the sensor, and sends the data read from it to the connected TCP client via WiFi. The TCP client is a Python script running on a nettop. It receives the data and, by design, should determine in real time whether the current level of activity is the minimum for the entire observation period. If so, the script tells the server (the Android program) to turn on a hint: a vibration on the phone or a fitness bracelet, an audio sample played through a headphone, light flashes, a slight electric shock =), etc.
Because only one EMG sensor is used, I admit that it will not be possible to catch REM sleep phases with 100% accuracy, but this is not necessary; 80% accuracy would already be good. For starters, even an algorithm for detecting a change in the current activity level would be suitable. There would be something to experiment with and something to build on.
The problem is with the algorithm. I would not like to use fixed thresholds, because these thresholds will be different for different people, and even for the same person at different times and in different states they will differ. Therefore, I will be glad to ideas and tips from your side.
I found an interesting article, "A Self-adaptive Threshold Method for Automatic Sleep Stage Classification Using EOG and EMG" (https://www.researchgate.net/publication/281722966_A_Self-adaptive_Threshold_Method_for_Automatic_Sleep_Stage_Classification_Using_EOG_and_EMG), but a few points remain unclear. First, how is the energy calculated (Step 1-4: Energy)? Here is my attempt:
import numpy as np
import pandas as pd
from struct import unpack
from scipy.signal import butter, lfilter

SAMPLE_RATE = 250          # Hz (assumed; set to your sensor's rate)
epoche_seconds = 30        # epoch length in seconds (assumed)
lowcut, highcut = 10, 100  # band-pass range in Hz (assumed)

def butter_bandpass_filter(data, lowcut, highcut, fs, order=2):
    # standard SciPy band-pass recipe (the original defines this elsewhere)
    nyq = 0.5 * fs
    b, a = butter(order, [lowcut / nyq, highcut / nyq], btype='band')
    return lfilter(b, a, data)

with open('emg.raw', 'rb') as f:  # assumed filename for the raw recording
    i = 0
    while True:
        d = f.read(epoche_seconds * SAMPLE_RATE * 2)  # 2 bytes per sample
        if not d:
            break
        print('epoche#{}'.format(i))
        i += 1
        fmt = '<{}H'.format(len(d) // 2)  # little-endian unsigned 16-bit
        d = np.array(unpack(fmt, d))
        d = d / (1 << 14)                 # 14 bits per sample
        df = pd.DataFrame({'signal': d, 'id': [x * 2 for x in range(len(d))]})
        df = df.set_index('id')
        d = df.signal
        # signal preparation:
        d = butter_bandpass_filter(d, lowcut, highcut, SAMPLE_RATE, order=2)
        # extract signal features:
        feature_iv = np.mean(np.absolute(d))  # mean absolute value
        print('iv={}'.format(feature_iv))
        feature_var = np.var(d)               # variance
        print('var={}'.format(feature_var))
        ws = 6                                # rolling window size
        df = pd.DataFrame({'signal': np.absolute(d)})
        feature_e = df.rolling(ws).sum()[ws - 1:].max().signal
        print('E={}'.format(feature_e))
Would this be right?
Secondly, can someone elaborate on this passage (Step 2-1: normalized processed)?
EMG and EOG feature vectors were processed with a normalization function before being involved in the classification steps. Feature vectors were normalized with function (4), where Xmax and Xmin were obtained by the following steps: first, sort the x(i) vector and set the window length N to 50; then compare 2Nk (k taking values from 10 down to 1) with the length of x(i): if 2Nk is bigger than the latter, reduce k until 2Nk is lower than the length of x(i). Once the length of x(i) is greater than 2Nk, compute the mean of the 50 largest values as Xmax and the average of the 50 smallest values as Xmin.
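Equation (4) itself is an image in the original post, but from the description of Xmax and Xmin it reads like a robust min-max normalization. A sketch of my reading of Step 2-1 (the min-max form of (4) is an assumption):

import numpy as np

def robust_min_max(x, N=50, k=10):
    # shrink k until 2*N*k fits inside len(x), per the quoted steps
    while k > 1 and 2 * N * k > len(x):
        k -= 1
    x_sorted = np.sort(x)
    if len(x) > 2 * N * k:
        x_min = x_sorted[:N].mean()    # average of the N smallest values
        x_max = x_sorted[-N:].mean()   # mean of the N largest values
    else:
        x_min, x_max = x_sorted[0], x_sorted[-1]  # fallback for short x
    return x_min, x_max

def normalize(x):
    x = np.asarray(x, dtype=float)
    x_min, x_max = robust_min_max(x)
    return (x - x_min) / (x_max - x_min)  # assumed min-max form of (4)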
If you are looking for REM sleep, a spectrogram will show you intensity and frequency information in a one-page graph/chart. People of a certain age refer to spectrograms as TFFT; the wiki link is https://en.wikipedia.org/wiki/Spectrogram
Try feeding the data into a spectrogram display/plot. Use a large FFT window (you probably have hours of data) with a small overlap (15%). I would recommend starting with an FFT window of around 1 second.
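As a rough illustration, a sketch along those lines with matplotlib (the 250 Hz sample rate and the dummy signal are assumptions; substitute the night's recording):

import numpy as np
import matplotlib.pyplot as plt

SAMPLE_RATE = 250            # Hz, assumed EMG sample rate
nfft = SAMPLE_RATE           # ~1 second FFT window, as suggested above
noverlap = int(nfft * 0.15)  # ~15% overlap

# dummy signal; replace with the full night's EMG samples
samples = np.random.randn(SAMPLE_RATE * 60)

plt.specgram(samples, NFFT=nfft, Fs=SAMPLE_RATE, noverlap=noverlap)
plt.xlabel('time [s]')
plt.ylabel('frequency [Hz]')
plt.show()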

How to obtain centroidal momentum matrix explicitly for momentum-based control purpose

There are useful APIs in Drake to compute centroidal momentum, like CalcSpatialMomentumInWorldAboutPoint, but for momentum-based control I ran into some problems with this API. During control there is:
l = A(q) * v
and I want to take this as a constraint to solve vdot (joint accelerations)
l_dot = A(q) * vdot + Adot(q) * v
where l is the stacked vector of linear and angular momentum, A(q) is the centroidal momentum matrix.
My questions are:
Is there any method to get the centroidal momentum matrix explicitly?
Drake currently provides CalcBiasCenterOfMassTranslationalAcceleration to get the
translational part of Adot(q) * v; can we get the rotational part too?
@hongkai-dai discussed this here; it is also mentioned in other literature [1][2]. Thanks very much for any suggestions.
[1] Koolen, Twan, et al. "Design of a momentum-based control framework and application to the humanoid robot atlas." International Journal of Humanoid Robotics 13.01 (2016): 1650007.
[2] Lee, Sung-Hee, and Ambarish Goswami. "A momentum-based balance controller for humanoid robots on non-level and non-stationary ground." Autonomous Robots 33.4 (2012): 399-414.
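For question 1, one workaround (a sketch of my own, not an official Drake API): since l = A(q) * v is linear in v, column j of A(q) is the spatial momentum about the CoM evaluated with v set to the j-th unit vector, using only CalcSpatialMomentumInWorldAboutPoint. This assumes a pydrake MultibodyPlant plant with a matching context:

import numpy as np

def centroidal_momentum_matrix(plant, context):
    # l = A(q) v is linear in v, so column j of A(q) is the spatial
    # momentum about the CoM computed with v = e_j
    nv = plant.num_velocities()
    v_saved = plant.GetVelocities(context).copy()
    p_com = plant.CalcCenterOfMassPositionInWorld(context)
    A = np.zeros((6, nv))
    for j in range(nv):
        e_j = np.zeros(nv)
        e_j[j] = 1.0
        plant.SetVelocities(context, e_j)
        h = plant.CalcSpatialMomentumInWorldAboutPoint(context, p_com)
        A[:3, j] = h.rotational()       # angular momentum rows
        A[3:, j] = h.translational()    # linear momentum rows
    plant.SetVelocities(context, v_saved)  # restore the context
    return A

For the rotational part of Adot(q) * v (question 2), one common fallback is to finite-difference A(q) between control steps, at the cost of some noise.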

Is there a fundamental difference between DSP for Audio Signal Processing and Sensor Signal Processing?

Audio is made up of multiple frequencies occurring at any given time, and we can perform the FFT to get the frequency bins, but what does the concept of frequency mean when it comes to sensor data?
For example, a triaxial accelerometer somehow converts a voltage signal and produces acceleration readings in m/s^2. Is the FFT performed on those X, Y, Z readings or on the voltages sampled at Fs?
Am I overcomplicating things or is there a difference in mindset when performing DSP for Audio vs Sensor data?
A Fourier transform is a tool to convert functions or signals into something that is easier to work with. It is a mathematical tool. The result can have an easy physical interpretation, but that is not always the case.
Assume you have an object with constant mass and several periodic sin-like forces F_1*sin(c*t), F_2*sin(d*t), ... that act on the object. The total force is just the sum of those forces:
F(t) = F_1*sin(c*t) + F_2*sin(d*t) + ...
You get the particle's acceleration using Newton's 2nd law:
m * a(t) = F(t)
=> a(t) = F(t) / m = F_1/m * sin(c*t) + F_2/m * sin(d*t) + ...
Let's assume you measured a(t) but don't know the equation above. If you perform a Fourier transform, you can calculate the values of F_1/m, F_2/m, ... . That means the Fourier transform of the acceleration gives the amplitude of the force at each frequency, divided by the object's mass.
This interpretation works because the Fourier transform is linear and so is the adding of forces (see Newton's 2nd law). If you describe something non-linear, chances are that there is no easy interpretation of the result of the transformation.
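To make this concrete, a small sketch with synthetic data (the 100 Hz sample rate and the amplitudes are made up): the amplitude spectrum of the measured acceleration recovers F_1/m and F_2/m at their frequencies.

import numpy as np

fs = 100.0                    # sample rate [Hz], assumed
t = np.arange(0, 10, 1 / fs)  # 10 s of samples
# synthetic acceleration: F_1/m = 0.5 at 3 Hz, F_2/m = 0.2 at 12 Hz
a = 0.5 * np.sin(2 * np.pi * 3 * t) + 0.2 * np.sin(2 * np.pi * 12 * t)

amplitude = np.abs(np.fft.rfft(a)) * 2 / t.size  # amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)          # frequency bins [Hz]
print(amplitude[freqs == 3.0], amplitude[freqs == 12.0])  # ~0.5, ~0.2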
So when do you perform the FFT? It depends:
If you do it to improve your signal (remove noise), do it on the measured data.
If you want to analyse the physical object (resonances) do it on the acceleration.
(If the conversion is linear (ADC output to m/s^2 is a simple multiplication) it does not matter.)
I hope this makes things a bit clearer (at least from the physical point of view).

3D object position prediction using Kalman filter, variable time period

I'm working on an advanced vision system which consists of two static cameras (used for obtaining accurate 3D object locations) and a targeting device. The object detection and stereovision modules are already done. Unfortunately, due to the delay of the targeting system, it is necessary to develop a proper prediction module.
I did some tests using a Kalman filter, but it is not accurate enough.
import cv2
import numpy as np

kalman = cv2.KalmanFilter(6, 3, 0)
...
# x, y, z: initial 3D position from the stereo module
kalman.statePre[0, 0] = x
kalman.statePre[1, 0] = y
kalman.statePre[2, 0] = z
kalman.statePre[3, 0] = 0
kalman.statePre[4, 0] = 0
kalman.statePre[5, 0] = 0
kalman.measurementMatrix = np.array([[1,0,0,0,0,0],
                                     [0,1,0,0,0,0],
                                     [0,0,1,0,0,0]], np.float32)
kalman.transitionMatrix = np.array([[1,0,0,1,0,0],
                                    [0,1,0,0,1,0],
                                    [0,0,1,0,0,1],
                                    [0,0,0,1,0,0],
                                    [0,0,0,0,1,0],
                                    [0,0,0,0,0,1]], np.float32)
kalman.processNoiseCov = np.eye(6, dtype=np.float32) * 0.03
kalman.measurementNoiseCov = np.eye(3, dtype=np.float32) * 0.003
I noticed that the time period between two frames is different each time (due to varying detection time).
How could I use the last timestamp diff as an input? (Transition matrix? controlParam?)
I also want to choose the prediction time, e.g. predict the position of the object 0.5 s or 1.5 s ahead.
I could provide example input 3D points.
Thanks in advance
1. How could I use the last timestamp diff as an input? (Transition matrix? controlParam?)
The step size is controlled through the prediction (transition) matrix. You also need to adjust the process noise covariance matrix to control uncertainty growth.
You are using a constant-velocity prediction model, so p_x(t+dt) = p_x(t) + v_x(t)·dt predicts the position in X with a time step dt (and the same for coordinates Y and Z). In that case, your prediction matrix should be:
kalman.transitionMatrix = np.array([[1,0,0,dt,0,0],
                                    [0,1,0,0,dt,0],
                                    [0,0,1,0,0,dt],
                                    [0,0,0,1,0,0],
                                    [0,0,0,0,1,0],
                                    [0,0,0,0,0,1]], np.float32)
I left the process noise cov. formulation as an exercise. Be careful with squaring or not squaring the dt term.
2. I want to choose the prediction time, e.g. predict the position of the object 0.5 s or 1.5 s ahead.
You can follow two different approaches:
Use a small fixed dt (e.g. 0.02 s for 50 Hz) and calculate predictions in a loop until you reach your goal (e.g. until you get a new observation from your cameras).
Adjust the prediction and process noise matrices online to the desired dt (0.5 / 1.5 s in your question) and execute a single prediction step. A sketch of both approaches follows below.
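A minimal sketch of both approaches; the set_dt helper and the sigma_a value are my own additions, and the diagonal process noise is a crude approximation (the full constant-velocity Q has cross terms, which is the squaring subtlety mentioned above):

import numpy as np

def set_dt(kalman, dt, sigma_a=1.0):
    # rebuild the constant-velocity transition matrix for time step dt
    F = np.eye(6, dtype=np.float32)
    F[0, 3] = F[1, 4] = F[2, 5] = dt
    kalman.transitionMatrix = F
    # crude diagonal process noise: position variance grows ~dt^4,
    # velocity variance ~dt^2
    Q = np.diag([dt**4 / 4.0] * 3 + [dt**2] * 3).astype(np.float32)
    kalman.processNoiseCov = sigma_a**2 * Q

# approach 1: loop small fixed steps up to a 0.5 s horizon
set_dt(kalman, 0.02)                     # 50 Hz internal step
for _ in range(int(round(0.5 / 0.02))):  # 25 steps of 0.02 s
    prediction = kalman.predict()

# approach 2: a single step with dt equal to the whole horizon
set_dt(kalman, 0.5)
prediction = kalman.predict()
x, y, z = prediction[:3, 0]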
If you are asking about how to anticipate the detection time of your cameras, that should be a different question and I am afraid I can't help you :-)

How to calculate distance from Wifi router using Signal Strength?

I would like to calculate the exact location of a mobile device inside a building (so no GPS access).
I want to do this using the signal strength (in dBm) of at least 3 fixed WiFi signals (3 fixed routers whose positions I know).
Google already does this, and I would like to know how they figure out the exact location based on this data.
Check this article for more details : http://www.codeproject.com/Articles/63747/Exploring-GoogleGears-Wi-Fi-Geo-Locator-Secrets
Free space path loss (FSPL) depends on two parameters: the frequency of the radio signal and the wireless transmission distance. The following formula reflects the relationship between them.
FSPL (dB) = 20log10(d) + 20log10(f) + K
d = distance
f = frequency
K= constant that depends on the units used for d and f
If d is measured in kilometers and f in MHz, the formula is:
FSPL (dB) = 20log10(d)+ 20log10(f) + 32.44
From the Fade Margin equation, Free Space Path Loss can be computed with the following equation.
Free Space Path Loss=Tx Power-Tx Cable Loss+Tx Antenna Gain+Rx Antenna Gain - Rx Cable Loss - Rx Sensitivity - Fade Margin
With the above two Free Space Path Loss equations, we can find out the Distance in km.
Distance (km) = 10^((Free Space Path Loss – 32.44 – 20log10(f))/20)
The Fresnel Zone is the area around the visual line-of-sight that radio waves spread out into after they leave the antenna. You want a clear line of sight to maintain strength, especially for 2.4GHz wireless systems. This is because 2.4GHz waves are absorbed by water, like the water found in trees. The rule of thumb is that 60% of Fresnel Zone must be clear of obstacles. Typically, 20% Fresnel Zone blockage introduces little signal loss to the link. Beyond 40% blockage the signal loss will become significant.
r = 17.32 * √(d / (4f))
d = distance [km]
f = frequency [GHz]
r = radius [m]
Source : http://www.tp-link.com/en/support/calculator/
To calculate the distance you need signal strength and frequency of the signal. Here is the java code:
public double calculateDistance(double signalLevelInDb, double freqInMHz) {
    double exp = (27.55 - (20 * Math.log10(freqInMHz)) + Math.abs(signalLevelInDb)) / 20.0;
    return Math.pow(10.0, exp);
}
The formula used is:
distance = 10 ^ ((27.55 - (20 * log10(frequency)) + |signalLevel|) / 20)
Example: frequency = 2412 MHz, signalLevel = -57 dBm, result = 7.000397427391188 m
This formula is a transformed form of the Free Space Path Loss (FSPL) formula, with the distance measured in meters and the frequency in megahertz. For other units you have to use a different constant instead of 27.55. Read about the constants here.
For more information read here.
K = 32.44
FSPL = Ptx - CLtx + AGtx + AGrx - CLrx - Prx - FM
d = 10 ^ (( FSPL - K - 20 log10( f )) / 20 )
Here:
K - constant (32.44, when f in MHz and d in km, change to -27.55 when f in MHz and d in m)
FSPL - Free Space Path Loss
Ptx - transmitter power, dBm ( up to 20 dBm (100mW) )
CLtx, CLrx - cable loss at transmitter and receiver, dB ( 0, if no cables )
AGtx, AGrx - antenna gain at transmitter and receiver, dBi
Prx - receiver sensitivity, dBm ( down to -100 dBm (0.1pW) )
FM - fade margin, dB ( more than 14 dB (normal) or more than 22 dB (good))
f - signal frequency, MHz
d - distance, m or km (depends on value of K)
Note: there is an error in the formulas on the TP-Link support site (missing ^).
Substitute Prx with received signal strength to get a distance from WiFi AP.
Example: Ptx = 16 dBm, AGtx = 2 dBi, AGrx = 0, Prx = -51 dBm (received signal strength), CLtx = 0, CLrx = 0, f = 2442 MHz (7'th 802.11bgn channel), FM = 22. Result: FSPL = 47 dB, d = 2.1865 m
Note: FM (fade margin) seems to be irrelevant here, but I'm leaving it because of the original formula.
You should also take walls into account; the attenuation table at http://www.liveport.com/wifi-signal-attenuation may help.
Example: (previous data) + one wooden wall ( 5 dB, from the table ). Result: FSPL = FSPL - 5 dB = 44 dB, d = 1.548 m
Also please note that antenna gain doesn't add power; it describes the shape of the radiation pattern (a donut in the case of an omnidirectional antenna, a zeppelin in the case of a directional antenna, etc.).
None of this takes into account signal reflections (don't have an idea how to do this). Probably noise is also missing. So this math may be good only for rough distance estimation.
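For convenience, here are the formulas above as a small Python function (the name fspl_distance is mine); with the example numbers it reproduces d ≈ 2.1865 m:

import math

def fspl_distance(ptx, agtx, agrx, prx, f_mhz, cltx=0.0, clrx=0.0, fm=0.0):
    # FSPL from the link budget, then distance in meters
    # (K = -27.55 for f in MHz and d in m, as in the legend above)
    fspl = ptx - cltx + agtx + agrx - clrx - prx - fm
    return 10 ** ((fspl + 27.55 - 20 * math.log10(f_mhz)) / 20)

# the example from above: expect roughly 2.1865 m
print(fspl_distance(ptx=16, agtx=2, agrx=0, prx=-51, f_mhz=2442, fm=22))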
The simple answer to your question would be triangulation, which is essentially the concept behind all GPS devices. I would give this article a read to learn more about how Google goes about doing this: http://www.computerworld.com/s/article/9127462/FAQ_How_Google_Latitude_locates_you_?taxonomyId=15&pageNumber=2.
From my understanding, they use a service similar to Skyhook, which is a location software that determines your location based on your wifi/cellphone signals. In order to achieve their accuracy, these services have large servers of databases that store location information on these cell towers and wifi access points - they actually survey metropolitan areas to keep it up to date. In order for you to achieve something similar, I would assume you'd have to use a service like Skyhook - you can use their SDK ( http://www.skyhookwireless.com/location-technology/ ).
However, if you want to do something internal (like using your own routers' locations) - then you'd likely have to create an algorithm that mimics Triangulation. You'll have to find a way to get the signal_strength and mac_address of the device and use that information along with the locations of your routers to come up with the location. You can probably get the information about devices hooked up to your routers by doing something similar to this ( http://www.makeuseof.com/tag/check-stealing-wifi/ ).
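If you do roll your own, the geometric part (strictly trilateration, since it uses distances rather than angles) can be a least-squares solve once you have range estimates from the FSPL formulas above. A sketch with made-up numbers; real WiFi ranging is far too noisy for this to be exact:

import numpy as np

def trilaterate(anchors, distances):
    # anchors: (n, 2) known router positions; distances: estimated ranges.
    # Subtracting the last circle equation from the others linearizes
    # |p - a_i|^2 = d_i^2 into A p = b, solved by least squares.
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2 * (anchors[:-1] - anchors[-1])
    b = (d[-1]**2 - d[:-1]**2
         + np.sum(anchors[:-1]**2, axis=1) - np.sum(anchors[-1]**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# three routers at known positions, ranges from the FSPL formula above
print(trilaterate([(0, 0), (10, 0), (0, 10)], [7.1, 7.0, 7.2]))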
Distance (km) = 10^((Free Space Path Loss – 92.45 – 20log10(f))/20), which is the same formula with f in GHz and d in km (K = 92.45).
In general, this is a really bad way of doing things due to multipath interference. This is definitely more of an RF engineering question than a coding one.
Tl;dr, the WiFi RF energy gets scattered in different directions after bouncing off walls, people, the floor, etc. There's no way of telling where you are by triangulation alone, unless you're in an empty room with the WiFi beacons placed in exactly the right places.
Google is able to get away with this because they essentially can map where every wifi SSID is to a GPS location when any android user (who opts in to their service) walks into range. That way, the next time a user walks by there, even without a perfect GPS signal, the google mothership can tell where you are. Typically, they'll use that in conjunction with a crappy GPS signal.
What I have seen done is a grid of Zigbee or BTLE devices. If you know where these are laid out, you can used the combined RSS to figure out relatively which ones you're closest to, and go from there.
You guys all need to learn to navigate with tools that predate GPS: something like a sextant, octant, backstaff, or an astrolabe.
If you receive the signal at 3 different locations, then you only need to measure the signal strength and form a ratio between those locations: a simple triangle calculation where a² + b² = c². The stronger the signal, the closer the device is to the receiver.
