How can I determine whether a person is in the REM phase using only data from an EMG sensor? In other words, I need to detect the lowest level of activity the EMG sensor reports, or at least register the phase change...
In more detail: I'm building a REM-phase detector using an EMG (electromyography) sensor. A sketch of the Android application is already on GitHub; if you are interested, I can post a link, although there is still work to be done.
The device relies on the fact that different brain states (wakefulness, slow-wave sleep, REM sleep) produce different levels of activity on the sensor. In REM sleep this activity is minimal.
The Bluetooth sensor is attached to the body before going to bed, the Android program is launched, connects to the sensor and forwards the data it reads to a connected TCP client over WiFi. The TCP client is a Python script running on a nettop. It receives the data and, by design, should determine in real time whether the current activity level is the minimum for the whole observation period. If so, the script tells the server (the Android program) to turn on a cue - a vibration on the phone or a fitness bracelet, an audio sample played through a headphone, light flashes, a slight electric shock =), etc.
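To make the data path concrete, here is a minimal sketch of the Python TCP client side. The host, port and the one-byte "turn on the cue" command are hypothetical placeholders, not taken from the actual project:

import socket

HOST = '192.168.1.50'   # hypothetical address of the Android TCP server
PORT = 5000             # hypothetical port

with socket.create_connection((HOST, PORT)) as sock:
    while True:
        chunk = sock.recv(4096)          # raw EMG samples streamed from the phone
        if not chunk:
            break
        # ... feed chunk into the activity-level estimator here ...
        activity_is_minimal = False      # placeholder for the real decision
        if activity_is_minimal:
            sock.sendall(b'\x01')        # hypothetical "turn on the cue" command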
Because only one EMG sensor is used, I accept that it will not be possible to catch REM sleep phases with 100% accuracy, but that is not necessary - 80% accuracy would already be good. For a start, even an algorithm that detects a change in the current activity level would do: it would give me something to experiment with and build on.
The problem is the algorithm. I would not like to use fixed thresholds, because these thresholds will differ between people, and even for the same person at different times and in different states. So I would be glad to hear ideas and tips from your side.
I found an interesting article, "A Self-adaptive Threshold Method for Automatic Sleep Stage Classification Using EOG and EMG" (https://www.researchgate.net/publication/281722966_A_Self-adaptive_Threshold_Method_for_Automatic_Sleep_Stage_Classification_Using_EOG_and_EMG):
But a few points remain unclear. First, how is the energy calculated (Step 1-4: Energy)? This is what I have:
# inside a per-epoch loop over the raw 16-bit EMG recording (file object f,
# epoch counter i); needs: from struct import unpack, numpy as np, pandas as pd
d = f.read(epoche_seconds * SAMPLE_RATE * 2)   # 2 bytes per sample
if not d:
    break
print('epoche#{}'.format(i))
fmt = '<{}H'.format(len(d) // 2)               # little-endian unsigned 16-bit samples
d = np.array(unpack(fmt, d))
d = d / (1 << 14)                              # 14 bits per sample -> normalize
df = pd.DataFrame({'signal': d, 'id': [x * 2 for x in range(len(d))]})
df = df.set_index('id')
d = df.signal
# signal preparation:
d = butter_bandpass_filter(d, lowcut, highcut, SAMPLE_RATE, order=2)
# extract signal features:
feature_iv = np.mean(np.absolute(d))           # mean absolute value of the filtered signal
print('iv={}'.format(feature_iv))
feature_var = np.var(d)                        # variance
print('var={}'.format(feature_var))
ws = 6                                         # rolling window length in samples
df = pd.DataFrame({'signal': np.absolute(d)})
feature_e = df.rolling(ws).sum()[ws - 1:].max().signal   # max short-window sum of |signal|
print('E={}'.format(feature_e))
Is this right?
Secondly, can someone elaborate on this:
Step 2-1: Normalization. EMG and EOG feature vectors were processed with a normalization function before being involved in the classification steps. Feature vectors were normalized with the following function (4), where Xmax and Xmin were obtained by the following steps: first, sort the x(i) vector and set the window length N to 50; then compare 2Nk (k taking values from 10 down to 1) with the length of x(i); if 2Nk is bigger than the latter, reduce k until 2Nk is lower than the length of x(i). If the length of x(i) is greater than 2Nk, compute the mean of the 50 largest values as Xmax, and the mean of the 50 smallest values as Xmin.
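If I read that procedure correctly, it is a robust min-max normalization where Xmax/Xmin are averages of the extreme samples rather than single outliers. Here is a sketch of my interpretation; the exact formula (4) is not quoted above, so the final (x - Xmin)/(Xmax - Xmin) scaling is my assumption:

import numpy as np

def normalize_feature(x, N=50):
    # my reading of Step 2-1: Xmax/Xmin are means of the N largest/smallest
    # values, provided the vector is long enough (the 2Nk check from the paper)
    x = np.asarray(x, dtype=float)
    k = 10
    while k > 1 and 2 * N * k > len(x):
        k -= 1
    if len(x) > 2 * N * k:
        s = np.sort(x)
        x_min = s[:N].mean()                 # mean of the 50 smallest values
        x_max = s[-N:].mean()                # mean of the 50 largest values
    else:
        x_min, x_max = x.min(), x.max()      # fallback, not specified in the paper
    # assumed formula (4): plain min-max scaling with the robust extremes
    return (x - x_min) / (x_max - x_min)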
If you are looking for REM (deep sleep), a spectrogram will show you intensity and frequency information in a single graph/chart. People of a certain age refer to spectrograms as the TFFT - the wiki link is https://en.wikipedia.org/wiki/Spectrogram
Can you feed the data into a spectrogram display/plot? Use a large FFT window (you probably have hours of data) with a small overlap (15%). I would recommend starting with an FFT window around 1 second.
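As a rough sketch of what that could look like in Python: the 1-second window and 15% overlap come from the suggestion above, and emg_signal / SAMPLE_RATE are assumed to be the OP's filtered signal and sample rate:

import numpy as np
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

nperseg = int(1.0 * SAMPLE_RATE)          # ~1 second FFT window
noverlap = int(0.15 * nperseg)            # ~15% overlap between windows
f, t, Sxx = spectrogram(emg_signal, fs=SAMPLE_RATE,
                        nperseg=nperseg, noverlap=noverlap)

plt.pcolormesh(t / 3600.0, f, 10 * np.log10(Sxx + 1e-12), shading='gouraud')  # hours vs Hz, dB scale
plt.xlabel('Time [h]')
plt.ylabel('Frequency [Hz]')
plt.show()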
A current source is exciting a loudspeaker with an AC current of ±5 mA. The voltage across the loudspeaker is measured with NI data acquisition hardware. The resistance of the loudspeaker changes with time, and the peak-to-peak amplitude of the voltage signal changes accordingly. How do I determine the relationship between the loudspeaker's resistance and the voltage peak-to-peak amplitude? In other words, how can I plot the signal's peak-to-peak amplitude as a function of time in LabVIEW?
Measure with an appropriate sample rate, at least 10 times higher than the maximum frequency of your signal.
Use a DAQ that has "synchronous sampling".
Measure current and voltage (synchronously, at a high sample rate). You can use either a shunt or a current transducer for the current measurement.
Sample in "blocks". This means: let the DAQ device store e.g. 10k values (at a sample rate of 100 kHz) in its internal memory, and read that buffer every 100 ms. Go to the Example Finder (Help -> Find Examples) and look for "continuous analog measurement" examples.
Calculate the RMS value of both signals for each block and plot that in a graph (see the sketch after this list). If you want it simple, feed both signals into a "Chart".
If the current is constant (which should be a straight line in the graph!), the voltage should rise over time as the inner resistance of the loudspeaker rises...
Note: be aware that with the example numbers above (100 kHz sample rate, block size 10k) calculating the RMS value will produce wrong results when your signal's main frequency is below 10 Hz!
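LabVIEW is graphical, so the block diagram itself can't be shown here, but the per-block math is simple. A Python sketch of the same idea, using the example numbers above (voltage_blocks and current_blocks are made-up names for the per-block arrays read from the DAQ):

import numpy as np

SAMPLE_RATE = 100_000          # 100 kHz, as in the example above
BLOCK_SIZE = 10_000            # 10k samples -> one block every 100 ms

def block_rms(block):
    return np.sqrt(np.mean(np.square(block)))

resistance_over_time = []
for v_block, i_block in zip(voltage_blocks, current_blocks):
    v_rms = block_rms(v_block)
    i_rms = block_rms(i_block)
    resistance_over_time.append(v_rms / i_rms)   # R = Vrms / Irms for each 100 ms block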
Audio is made up of multiple frequencies occurring at any given time, and we can perform the FFT to get the frequency bins, but what does the concept of frequency mean when it comes to sensor data?
For example, a triaxial accelerometer somehow converts a voltage signal and produces acceleration readings in m/s^2. Is the FFT performed on those X, Y, Z readings, or on the voltages sampled at Fs?
Am I overcomplicating things or is there a difference in mindset when performing DSP for Audio vs Sensor data?
A Fourier transform is a tool to convert functions or signals into something that is easier to work with. It is a mathematical tool. The result can have an easy physical interpretation, but that is not always the case.
Assume you have an object with constant mass and several periodic sine-like forces F_1*sin(c*t), F_2*sin(d*t), ... that act on the object. The total force is just the sum of those forces:
F(t) = F_1*sin(c*t) + F_2*sin(d*t) + ...
You get the particle's acceleration using Newton's 2nd law:
m * a(t) = F(t)
=> a(t) = F(t) / m = F_1/m * sin(c*t) + F_2/m * sin(d*t) + ...
Let's assume you measured a(t) but don't know the equation above. If you perform a Fourier transform, you can calculate the values of F_1/m, F_2/m, ... . That means the Fourier transform of the acceleration gives, at each frequency, the amplitude of the force at that frequency divided by the object's mass.
This interpretation works because the Fourier transform is linear and so is the adding of forces (See Newtons 2nd law). If you describe something non-linear chances are that there is no easy interpretation of the result of the transformation.
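To make this concrete, here is a small numpy sketch; the frequencies, amplitudes and mass are made-up numbers, not from the question:

import numpy as np

fs = 1000.0                          # sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)        # 2 seconds of samples
m = 2.0                              # mass in kg
# a(t) = F_1/m * sin(2*pi*5*t) + F_2/m * sin(2*pi*40*t), with F_1 = 3 N, F_2 = 1.5 N
a = (3.0 / m) * np.sin(2 * np.pi * 5 * t) + (1.5 / m) * np.sin(2 * np.pi * 40 * t)

spectrum = np.fft.rfft(a)
freqs = np.fft.rfftfreq(len(a), 1 / fs)
amplitudes = 2 * np.abs(spectrum) / len(a)   # peak amplitude per frequency bin

# prints ~1.5 at 5 Hz and ~0.75 at 40 Hz, i.e. the F_k / m values
for f0 in (5, 40):
    print(f0, amplitudes[np.argmin(np.abs(freqs - f0))])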
So when do you perform the FFT? It depends:
If you do it to improve your signal (remove noise), do it on the measured data.
If you want to analyse the physical object (resonances) do it on the acceleration.
(If the conversion is linear (ADC output to m/s^2 is a simple multiplication) it does not matter.)
I hope this makes things a bit clearer (at least from the physical point of view).
I am working in research where we are using a smartphone camera to monitor a user's heart rate, using colour variation as the signal.
What I did is extract the red colour channel every 0.1 second (10 Hz).
The problem is that I am trying to use an FFT to get the different frequencies that exist in the extracted signal, and I used this Java code, where the FFT function takes two arrays as input (one for the real part and one for the imaginary part of the complex numbers).
I saw also from this post that I can compute frequencies from the FFT function's results by using the formula:
freq = i * Fs / N
where Fs is the sampling rate and N is the number of points(input).
The problem is that my sampling rate Fs is very low (10 Hz), and if I use the above formula I only get very low frequencies. Is there any other way to get the frequencies?
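Just to illustrate the formula with my numbers (samples is a placeholder name for the extracted red-channel values, not from the original code): with Fs = 10 Hz the bins computed by i * Fs / N cover 0 to 5 Hz, and the strongest bin can be read out like this:

import numpy as np

Fs = 10.0                                   # 10 samples per second
red_channel = np.asarray(samples, dtype=float)
red_channel = red_channel - red_channel.mean()

spectrum = np.abs(np.fft.rfft(red_channel))
freqs = np.fft.rfftfreq(len(red_channel), d=1 / Fs)   # same as i * Fs / N

# restrict to plausible heart rates (~0.7-3 Hz, i.e. 40-180 bpm) and pick the strongest bin
band = (freqs >= 0.7) & (freqs <= 3.0)
peak_freq = freqs[band][np.argmax(spectrum[band])]
print('heart rate ~ {:.0f} bpm'.format(peak_freq * 60))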
I'm trying to grasp the basic concept of how distance estimation with iBeacon (beacon / Bluetooth Low Energy / BLE) can work. Is there any true documentation on how far exactly an iBeacon can measure? Let's say I am 300 feet away... is it possible for an iBeacon to detect this?
Specifically for v4 & v5 and with iOS, but generally any BLE device.
How do Bluetooth frequency & throughput affect this? Can beacon devices enhance or restrict the distance / improve upon the underlying BLE?
i.e.:

| Version        | Range       | Freq        | T/sec       | Topo       |
|----------------|-------------|-------------|-------------|------------|
| Bluetooth v2.1 | Up to 100 m | < 2.481 GHz | < 2.1 Mbit  | scatternet |
| Bluetooth v4   | ?           | < 2.481 GHz | < 305 kbit  | mesh       |
| Bluetooth v5   | ?           | < 2.481 GHz | < 1306 kbit | mesh       |
The distance estimate provided by iOS is based on the ratio of the beacon signal strength (rssi) over the calibrated transmitter power (txPower). The txPower is the known measured signal strength in rssi at 1 meter away. Each beacon must be calibrated with this txPower value to allow accurate distance estimates.
While the distance estimates are useful, they are not perfect, and require that you control for other variables. Be sure you read up on the complexities and limitations before misusing this.
When we were building the Android iBeacon library, we had to come up with our own independent algorithm because the iOS CoreLocation source code is not available. We measured a bunch of rssi measurements at known distances, then did a best fit curve to match our data points. The algorithm we came up with is shown below as Java code.
Note that the term "accuracy" here is iOS speak for distance in meters. This formula isn't perfect, but it roughly approximates what iOS does.
protected static double calculateAccuracy(int txPower, double rssi) {
    if (rssi == 0) {
        return -1.0; // if we cannot determine accuracy, return -1.
    }

    double ratio = rssi * 1.0 / txPower;
    if (ratio < 1.0) {
        return Math.pow(ratio, 10);
    }
    else {
        double accuracy = (0.89976) * Math.pow(ratio, 7.7095) + 0.111;
        return accuracy;
    }
}
Note: The values 0.89976, 7.7095 and 0.111 are the three constants calculated when solving for a best fit curve to our measured data points. YMMV
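As a rough sanity check of the curve (my own arithmetic, not from the library's documentation): with txPower = -59 and rssi = -65, ratio ≈ 1.10, and 0.89976 * 1.10^7.7095 + 0.111 ≈ 2.0, so the formula reports roughly 2 meters.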
I'm investigating the matter of accuracy/rssi/proximity with iBeacons very thoroughly, and I really think that most of the resources on the Internet (blogs, posts on Stack Overflow) get it wrong.
davidgyoung (accepted answer, > 100 upvotes) says:
Note that the term "accuracy" here is iOS speak for distance in meters.
Actually, most people say this but I have no idea why! The documentation makes it very clear that CLBeacon.accuracy:
Indicates the one sigma horizontal accuracy in meters. Use this property to differentiate between beacons with the same proximity value. Do not use it to identify a precise location for the beacon. Accuracy values may fluctuate due to RF interference.
Let me repeat: one sigma accuracy in meters. All of the top 10 pages on Google on the subject have the term "one sigma" only in the quotation from the docs, but none of them analyses the term, which is core to understanding this.
It is very important to explain what one sigma accuracy actually is. The following URLs are a good start: http://en.wikipedia.org/wiki/Standard_error, http://en.wikipedia.org/wiki/Uncertainty
In the physical world, when you make a measurement, you always get different results (because of noise, distortion, etc.), and very often the results form a Gaussian distribution. There are two main parameters describing a Gaussian curve:
the mean (which is easy to understand: it's the value at which the peak of the curve occurs),
the standard deviation, which says how wide or narrow the curve is. The narrower the curve, the better the accuracy, because all results are close to each other. If the curve is wide and not steep, measurements of the same phenomenon differ very much from each other, so the measurement has bad quality.
One sigma is another way to describe how narrow or wide the Gaussian curve is.
It simply says that if the mean of the measurement is X, and one sigma is σ, then 68% of all measurements will fall between X - σ and X + σ.
Example: we measure distance and get a Gaussian distribution as a result. The mean is 10 m. If σ is 4 m, then 68% of measurements were between 6 m and 14 m.
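A quick way to convince yourself of the 68% figure is to simulate it (the numbers below just mirror the example above):

import numpy as np

rng = np.random.default_rng(0)
measurements = rng.normal(loc=10.0, scale=4.0, size=100_000)  # mean 10 m, sigma 4 m

within_one_sigma = np.mean((measurements > 6.0) & (measurements < 14.0))
print(within_one_sigma)   # ~0.68, i.e. about 68% of samples fall in [6 m, 14 m]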
When we measure distance with beacons, we get the RSSI and the 1-meter calibration value, which allow us to estimate the distance in meters. But every measurement gives a different value, and those values form a Gaussian curve. One sigma (and accuracy) is the accuracy of the measurement, not the distance!
It may be misleading, because when we move the beacon further away, one sigma actually increases, since the signal gets worse. But with different beacon power levels we can get totally different accuracy values without actually changing the distance. The higher the power, the smaller the error.
There is a blog post which thoroughly analyses the matter: http://blog.shinetech.com/2014/02/17/the-beacon-experiments-low-energy-bluetooth-devices-in-action/
The author has a hypothesis that accuracy is actually distance. He claims that beacons from Kontakt.io are faulty because when he increased the power to the maximum value, the accuracy value was very small for 1, 5 and even 15 meters. Before increasing the power, the accuracy was quite close to the distance values. I personally think this behaviour is correct, because the higher the power level, the smaller the impact of interference. And it's strange that Estimote beacons don't behave this way.
I'm not saying I'm 100% right, but apart from being an iOS developer I have a degree in wireless electronics, and I think we shouldn't ignore the "one sigma" term from the docs; I would like to start a discussion about it.
It may be possible that Apple's algorithm for accuracy just collects recent measurements and analyses their Gaussian distribution, and that's how it sets accuracy. I wouldn't exclude the possibility that they use info from the accelerometer to detect whether the user is moving (and how fast), in order to reset the previous distribution of distance values, because those have certainly changed.
The iBeacon output power is measured (calibrated) at a distance of 1 meter. Let's suppose that this is -59 dBm (just an example). The iBeacon will include this number as part of its LE advertisement.
The listening device (iPhone, etc.) will measure the RSSI of the device. Let's suppose, for example, that this is -72 dBm.
Since these numbers are in dBm, the ratio of the power is actually the difference in dB. So:
ratio_dB = txCalibratedPower - RSSI
To convert that into a linear ratio, we use the standard formula for dB:
ratio_linear = 10 ^ (ratio_dB / 10)
If we assume conservation of energy, then the signal strength must fall off as 1/r^2. So:
power = power_at_1_meter / r^2. Solving for r, we get:
r = sqrt(ratio_linear)
In Javascript, the code would look like this:
function getRange(txCalibratedPower, rssi) {
    var ratio_db = txCalibratedPower - rssi;
    var ratio_linear = Math.pow(10, ratio_db / 10);
    var r = Math.sqrt(ratio_linear);
    return r;
}
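For example, with txCalibratedPower = -59 dBm and rssi = -72 dBm, ratio_db = 13 dB, ratio_linear ≈ 20, so r ≈ 4.5 meters (under the free-space 1/r^2 assumption above).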
Note that if you're inside a steel building, there may be internal reflections that make the signal decay more slowly than 1/r^2. If the signal passes through a human body (water) it will be attenuated. It's very likely that the antenna doesn't have equal gain in all directions. Metal objects in the room may create strange interference patterns. Etc., etc... YMMV.
iBeacon uses Bluetooth Low Energy(LE) to keep aware of locations, and the distance/range of Bluetooth LE is 160ft (http://en.wikipedia.org/wiki/Bluetooth_low_energy).
Distances to the source of iBeacon-formatted advertisement packets are estimated from the signal path attenuation calculated by comparing the measured received signal strength to the claimed transmit power which the transmitter is supposed to encode in the advertising data.
A path-loss-based scheme like this is only approximate and is subject to variation from things like antenna angles, intervening objects, and presumably a noisy RF environment. In comparison, systems really designed for distance measurement (GPS, radar, etc.) rely on precise measurements of propagation time, in some cases even examining the phase of the signal.
As Jiaru points out, 160 ft is probably beyond the intended range, but that doesn't necessarily mean that a packet will never get through, only that one shouldn't expect it to work at that distance.
With multiple phones and beacons at the same location, it's going to be difficult to measure proximity with any high degree of accuracy. Try using the Android "b and l bluetooth le scanner" app to visualize the signal strength (distance) variations for multiple beacons, and you'll quickly discover that complex, adaptive algorithms may be required to provide any form of consistent proximity measurement.
You're going to see lots of solutions simply instructing the user to "please hold your phone here", to reduce customer frustration.