SNR & BER in GNU Radio

I have to determine the quality of a received signal in GNU Radio, and for this I can measure either the bit error rate (BER) or the signal-to-noise ratio (SNR).
Can you please tell me which is easier to perform, and which of these two parameters gives a more accurate reading in GNU Radio?

Here is a list of already-available blocks that provide link quality in GNU Radio:
SNR estimation can be performed with the MPSK SNR Estimator block.
BER computation is provided by the BER block.
Beyond that, this is a very open question that depends on your specific application; either one could be both easy and accurate.
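The MPSK SNR Estimator block offers several moment-based estimators. As a rough illustration of the idea, here is a standalone NumPy sketch of an M2M4-style estimator; this is my own toy version, not GNU Radio's implementation, and the QPSK test signal at 10 dB SNR is made up for the sanity check:

```python
import numpy as np

def m2m4_snr_db(x):
    """Moment-based (M2M4) SNR estimate for a constant-modulus
    (e.g. PSK) complex signal in complex AWGN."""
    m2 = np.mean(np.abs(x) ** 2)                 # signal + noise power
    m4 = np.mean(np.abs(x) ** 4)
    s = np.sqrt(max(2.0 * m2 ** 2 - m4, 0.0))    # estimated signal power
    n = max(m2 - s, 1e-12)                       # estimated noise power
    return 10.0 * np.log10(s / n)

# Sanity check: QPSK symbols at a known 10 dB SNR
rng = np.random.default_rng(0)
num = 100_000
sym = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, num)))
npow = 10.0 ** (-10.0 / 10.0)
noise = np.sqrt(npow / 2) * (rng.standard_normal(num) + 1j * rng.standard_normal(num))
print(round(float(m2m4_snr_db(sym + noise)), 1))   # close to 10.0
```

One practical difference between the two metrics: a moment-based SNR estimate like this works without knowing the transmitted bits, whereas a BER measurement needs a known reference sequence.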

Related

How to calculate reflection with GNUradio?

I am performing an experiment that involves a transmitter, a material target, and two receivers (as a baseline). The goal is to record the RF reflectivity of the target. How can I calculate/measure this from the received signal, and can it be done in GNU Radio Companion?
Any help is appreciated.
Thank You.
You can do that, in many ways. In the end, chances are you'll send some predefined signal, e.g., precomputed white pseudorandom noise from a "vector source", record that (e.g. using a "file sink" or a "vector sink") and build a correlation estimator that processes that data offline.
Of course, a correlation is just convolution with the (conjugate) time-inverse, so you can also (conjugate if complex and) time-reverse your reference signal, and use it as filter taps.
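As an offline NumPy sketch of that filter-taps trick (the probe signal and the delay are invented for illustration): correlating against the reference is the same as convolving with its conjugated, time-reversed copy.

```python
import numpy as np

rng = np.random.default_rng(1)
ref = rng.standard_normal(64) + 1j * rng.standard_normal(64)   # known probe signal
rx = np.concatenate([np.zeros(100, complex), 0.5 * ref,
                     np.zeros(100, complex)])                  # echo at delay 100

# Correlation as a filter: conjugate and time-reverse the reference
taps = np.conj(ref[::-1])
corr = np.convolve(rx, taps, mode='valid')
delay = int(np.argmax(np.abs(corr)))
print(delay)   # → 100, the delay at which the echo was inserted
```

In a flowgraph you could load the same `taps` array into e.g. an FIR Filter block instead of doing the convolution offline.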
Note that in general, SDR devices are nice and linear, but not calibrated – you can only compare received signal powers, but you cannot attribute an absolute power to them – unless you know the strength of some reference reception.

How do I correct sensor drift from external environment?

I have an electromagnetic sensor which reports the electromagnetic field strength at its location in space.
I also have a device that emits an electromagnetic field; it covers about a 1-meter area.
I want to predict the position of the sensor from its readings.
But the sensor is affected by metal, which makes the position prediction drift.
For example, if the reading should be 1 and you put the sensor near metal, you get 2.
It's not just noise; it's a permanent drift: unless you remove the metal, it will always read 2.
What techniques or topics do I need to learn, in general, to recover reading 1 from reading 2?
Suppose that the metal is fixed somewhere in space and that I can calibrate the sensor by putting it near the metal first.
Any suggestions about removing the drift in general are welcome. Also, please consider that I can place another emitter somewhere, which should make it easier to recover the true reading.
Let me suggest that you view your sensor output as a combination of two factors:
sensor_output = emitter_effect + environment_effect
And you want to obtain emitter_effect without the addition of environment_effect. So, of course you need to subtract:
emitter_effect = sensor_output - environment_effect
Subtracting the environment's effect on your sensor is usually called compensation. In order to compensate, you need to be able to model or predict the effect your environment (extra metal floating around) is having on the sensor. The form of the model for your environment effect can be very simple or very complex.
Simple methods generally use a separate sensor to estimate environment_effect. I'm not sure exactly what your scenario is, but you may be able to select a sensor which would independently measure the quantity of interference (metal) in your setup.
More complex methods can perform compensation without referring to an independent sensor for measuring interference. For example, if you expect the distance to be 10.0 on average with only occasional deviations, you could use that fact to estimate how much interference is present. In my experience, this type of method is less reliable; systems with independent sensors for measuring interference are more predictable and reliable.
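A toy sketch of the independent-sensor approach, with all numbers invented: the main sensor sees emitter plus environment, and a hypothetical reference sensor sees only the environment (at twice the gain, to show why a calibration scale factor is needed).

```python
import numpy as np

rng = np.random.default_rng(2)
true_field = 1.0                          # emitter_effect we want to recover
env = np.ones(500)                        # constant drift caused by nearby metal
main = true_field + env + 0.05 * rng.standard_normal(500)   # sensor_output
ref = 2.0 * env + 0.05 * rng.standard_normal(500)           # environment-only sensor

# Calibration (emitter off): main reads env while ref reads 2*env,
# so the environment-path scale factor between the two sensors is 0.5.
scale = 0.5
compensated = main - scale * ref   # emitter_effect = sensor_output - environment_effect
print(round(float(np.mean(compensated)), 2))   # close to the true value of 1.0
```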
You can start reading about Kalman filtering if you're interested in model-based estimation:
https://en.wikipedia.org/wiki/Kalman_filter
It's a complex topic, so you should expect a steep learning curve. Kalman filtering (and related Bayesian estimation methods) are the formal way to convert from "bad sensor reading" to "corrected sensor reading".
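To give a flavor of the machinery, here is a minimal scalar Kalman filter smoothing noisy readings of a constant value; the process and measurement noise variances q and r are assumptions you would tune for your actual sensor:

```python
import numpy as np

def kalman_1d(zs, q=1e-4, r=1e-2):
    """Scalar Kalman filter for a (nearly) constant state.
    q: process noise variance, r: measurement noise variance."""
    x, p = 0.0, 1.0                 # initial state estimate and its variance
    out = []
    for z in zs:
        p = p + q                   # predict: state unchanged, uncertainty grows
        k = p / (p + r)             # Kalman gain
        x = x + k * (z - x)         # update with measurement z
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(3)
zs = 1.0 + 0.1 * rng.standard_normal(200)   # noisy readings of a true value of 1
est = kalman_1d(zs)
print(round(float(est[-1]), 2))             # close to 1.0
```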

How to evaluate a sine-sweep (chirp) signal for system identification

(for the quick reader)
Question: Am I right that the spectral-analysis method is not well suited to analyzing a chirp for parameter estimation / model identification?
[EDIT]
My system is open-loop, with 1 input (steering wheel angle) and 2 outputs (y-acceleration and yaw_Rate). To find the vehicle characteristics I want to fit a linear transfer function (bicycle model) to my data.
My current method is the 'Spectral analysis method': using test data to estimate the FRF and therefore the transfer function, because:
For dummy data (two transfer functions excited by a chirp steering-wheel angle) this works very well: 99.98% accuracy when refitting the model. For real test data from a real vehicle, it is nowhere near correct, even if I average the data over 11 runs. Hence my confusion/question.
[will upload images of the test data tonight for clarification]
Background
I'm working on a project where I have to perform parameter identification of a car.
In simulator-based compensatory tracking experiments I would excite the 'system' (read: human) with a multi-sine signal and use the instrumental-variable method (and function fitting) to perform system identification (Fourier transforming input and output, and only evaluating the excited frequencies).
However, for a human driver this may be a bit difficult to do in the car. It is easier to provide a sine sweep (or chirp).
Unfortunately, I think this input signal is not compatible with direct frequency-domain analysis, because each frequency is only excited during a specific time frame, while the Fourier transform assumes a harmonic oscillation during the entire sample time.
I have checked some books (System Identification: A Frequency Domain Approach and System Identification: An Introduction) but can't seem to get a grip on how to use the chirp signal for estimating the frequency response function (and thus the transfer function).
Short answer (for the quick reader):
It depends on what you want to do. And yes, multisine signals can have favourable properties compared to chirps.
A bit longer answer:
You ask about chirp signals and their suitability for system identification / parameter estimation, so I assume you focus on frequency-domain identification; I will not comment on the time domain.
If you read the book System Identification: A Frequency Domain Approach by Pintelon/Schoukens (try to get the second edition from 2012), you will find (cf. chapter 2) that the authors favour periodic signals over nonperiodic ones like chirps, and they do so for good reasons: periodic signals avoid major errors like leakage.
However if your system cannot be excited by periodic signals (for whatever reason), chirp signals may be a great excitation signal. In the aviation world, test pilots are even taught to perform good chirp signals. The processing of your data may be different for chirps (take a look at chapter 7 in the Pintelon/Schoukens book).
In the end there is just one thing that makes a good excitation signal: it gives the desired estimation result. If chirps work for your application, go with them!
Unfortunately, I think this input signal is not compatible with direct frequency-domain analysis, because each frequency is only excited during a specific time frame, while the Fourier transform assumes a harmonic oscillation during the entire sample time.
I do not understand what you mean in this paragraph. Can you describe your problem in more detail?
P.S.: You didn't write much about your system. Is it static or dynamic? Linear / Nonlinear? Open loop or closed loop? SISO/MIMO? Are you limited to frequency domain ID? Can you repeat experiments? Each subject should be kept in mind when you decide about the excitation.
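As an illustration of the spectral-analysis method under discussion, the FRF can be estimated from chirp data as H(f) = S_uy(f)/S_uu(f) using averaged cross- and auto-spectra. This SciPy sketch uses a made-up first-order plant in place of the bicycle model, and all rates and sweep limits are invented:

```python
import numpy as np
from scipy import signal

fs = 100.0
t = np.arange(0.0, 60.0, 1.0 / fs)
u = signal.chirp(t, f0=0.1, t1=60.0, f1=5.0, method='logarithmic')  # sweep input

# Stand-in plant: first-order system 5/(s+5) (unity DC gain), discretized
b, a = signal.bilinear([5.0], [1.0, 5.0], fs=fs)
y = signal.lfilter(b, a, u)

# Spectral-analysis FRF estimate: H(f) = S_uy(f) / S_uu(f)
f, s_uu = signal.welch(u, fs=fs, nperseg=1024)
_, s_uy = signal.csd(u, y, fs=fs, nperseg=1024)
h = s_uy / s_uu

# Compare the estimate with the true gain at the bin nearest ~0.8 Hz (5 rad/s)
idx = int(np.argmin(np.abs(f - 0.8)))
true_mag = 5.0 / np.sqrt(25.0 + (2.0 * np.pi * f[idx]) ** 2)
print(round(float(np.abs(h[idx])), 2), round(float(true_mag), 2))
```

With noise-free simulated data the estimate lands close to the true gain; with real vehicle data, leakage, transients, and noise on both channels bias this estimate, which is one reason the Pintelon/Schoukens book prefers periodic excitation.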

How can I find process noise and measurement noise in a Kalman filter if I have a set of RSSI readings?

I have RSSI readings but no idea how to find the measurement and process noise. What is the way to find those values?
Not at all. RSSI stands for "Received Signal Strength Indicator" and says absolutely nothing about the signal-to-noise ratio relevant to your Kalman filter. RSSI is not a well-defined thing; it can mean a million things:
Defining the "strength" of a signal is a tricky thing. Imagine you're sitting in a car with an FM radio. What do the RSSI bars on that radio's display mean? Maybe:
The amount of energy passing through the antenna port (including noise, because at this point no one knows what is noise and what is signal)?
The amount of energy passing through the selected bandpass for the whole ultra-shortwave band (78-108 MHz, depending on region) (incl. noise)?
Energy coming out of the preamplifier (incl. noise, plus noise generated by the amplifier)?
Energy passing through the IF filter, which selects your individual station (is that already the signal strength as you want to define it?)?
RMS of the voltage observed by the ADC (the ADC probably samples much higher than your channel bandwidth) (is that the signal strength as you want to define it?)?
RMS of the digital values after a digital channel selection filter (i.t.t.s.s.a.y.w.t.d.i?)?
RMS of the digital values after FM demodulation (i.t.t.s.s.a.y.w.t.d.i?)?
RMS of the digital values after FM demodulation and audio frequency filtering for a mono mix (i.t.t.s.s.a.y.w.t.d.i?)?
RMS of digital values in a stereo audio signal (i.t.t.s.s.a.y.w.t.d.i?) ?
...
As you can imagine, for systems like FM radios this is still relatively easy. For things like mobile phones, multichannel GPS receivers, WiFi cards, digital beamforming radars, etc., RSSI really can mean everything or nothing at all.
You will have to mathematically define a way to describe what your noise is. Then you will need to find the formula that describes your exact implementation of "RSSI", and then you can deduce whether knowing RSSI says anything about process noise.
A Kalman Filter is a mathematical construct for computing the expected state of a system that is changing over time, given an initial state and noisy measurements of that system. The key to the "process noise" component of this is the fact that the system is changing. The way that the system changes is the process.
Your state might change due to manual control or due to the nature of the system. For example, if you have a car on a hill, it can roll down the hill naturally (described by the state transition matrix), or you might drive it down the hill manually (described by the control input matrix). Any noise that might affect these inputs - wind, bumps, twitches - can be described with the process noise.
You can measure the process noise the way you would measure variance in any system - take the expected dynamics and compare them with the true dynamics to generate a covariance matrix.
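A minimal sketch of that recipe for a hypothetical 1-D constant-velocity model (all numbers invented): predict each next state with the expected dynamics, subtract from the true trajectory, and take the variance of the residuals.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 1-D constant-velocity process: x[k+1] = x[k] + v*dt + w[k]
dt, v, n = 0.1, 2.0, 5000
w = 0.05 * rng.standard_normal(n)      # the process noise we pretend not to know
x = np.cumsum(v * dt + w)              # "true" trajectory

predicted = x[:-1] + v * dt            # expected dynamics applied to each state
residual = x[1:] - predicted           # true minus expected dynamics
q_est = float(np.var(residual))        # sample process-noise variance
print(round(q_est, 4))                 # close to 0.05**2 = 0.0025
```

For a multi-dimensional state the same residuals would feed `np.cov` to produce the full process-noise covariance matrix.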

Phase difference between two signals?

I'm working on an embedded project where I have to drive a transducer at resonance by calculating the phase difference between its voltage and current waveforms and bringing it to zero by changing the frequency. The current I and voltage V are signals of the same frequency at any instant, but that frequency is not fixed (approximately 47 kHz to 52 kHz). All I have to do is calculate the phase difference between these two signals. Which method will be most effective?
FFT of Two signals and then phase difference between the specific components
Or cross-correlation of two signals?
Or another method, if any? Which method will give me the most accurate result, and with what resolution? Does the sampling rate affect the phase-difference resolution (the minimum phase difference that can be sensed)?
I'm new to digital signal processing, so in case of any mistake, please correct me.
ADDITIONAL DETAILS:
Noise in my system can be white/Gaussian noise (not significant) and harmonics of the fundamental (which might be the significant one in the resonance-mismatch case).
Yes, a 4046 can be a good alternative with switching regulators. I'm working with an NCO/DDS, where I can scale/reshape the sinusoid on an ongoing basis.
Implementing an analog filter would be very complex, as I would require a high-order filter with a steep roll-off for harmonic removal, so I'm choosing a DSP-based filter; it is easy to work with MATLAB and DSP processors.
What sampling rate would you suggest for a ~50 kHz (47-52 kHz) system to achieve a phase resolution of preferably ≤0.1 degrees with an FFT or Goertzel? The frequency steps will vary from as small as ~1-2 Hz up to 50-200 Hz.
My frequency is variable (45-55 kHz) but will be known to my system. Knowing the phase error for the last fed frequency is most desirable. After the FFT and digital filtering, an IFFT can be performed to obtain less noisy samples for further processing, so I guess the FFT does both tasks.
But I'm wondering about the phase-difference accuracy, because that's the crucial part.
The Goertzel algorithm (http://www.embedded.com/design/configurable-systems/4024443/The-Goertzel-Algorithm) is a fairly efficient tone-detection method that resolves the signal into real and imaginary components. I'll assume you can do the math to get the phase difference, or just the polarity, as you require.
Resolution versus time constant is a design tradeoff; this article highlights the issues: http://www.mstarlabs.com/dsp/goertzel/goertzel.html
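A minimal Goertzel sketch for the phase-difference use case (the 1 MHz sample rate and 30-degree V-I offset are made up, and the recursion below is one common formulation): run both signals through the same single-bin filter at the known drive frequency and divide the resulting phasors.

```python
import numpy as np

def goertzel(x, f, fs):
    """Single-bin DFT of x at frequency f (Hz) via the Goertzel recursion.
    Returns a complex phasor (up to a constant rotation that is identical
    for any input of the same length)."""
    w = 2.0 * np.pi * f / fs
    coeff = 2.0 * np.cos(w)
    s1 = s2 = 0.0
    for sample in x:
        s0 = sample + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 - np.exp(-1j * w) * s2

fs, f0, n = 1_000_000, 50_000.0, 2000    # 2000 samples = an integer number of cycles
t = np.arange(n) / fs
v = np.sin(2 * np.pi * f0 * t)                    # voltage reference
i = np.sin(2 * np.pi * f0 * t - np.deg2rad(30))   # current lagging by 30 degrees

dphi = np.angle(goertzel(v, f0, fs) / goertzel(i, f0, fs))
print(round(float(np.degrees(dphi)), 1))   # → 30.0
```

The constant phase factor of the Goertzel output cancels in the division, so only the relative phase between the two channels matters.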
Additional
"What accuracy can be obtained?"
It depends... on what you are faced with (signal levels, external noise, etc.), what hardware you have (ADC, processor, etc.), and how you implement your solution (sample rate, numerical precision, etc.). Without the complete picture, I would only be guessing at what you could achieve, as the Goertzel approach is far from easy.
But I imagine that for a high-school project with good signal levels and low noise, an easier method using the phase comparator of a 4046 PLL (comparator 2, as it locks at zero degrees; www.nxp.com/documents/data_sheet/HEF4046B.pdf) will likely get you down to a few degrees.
One other issue if you have a high Q transducer is generating a high-resolution frequency. There is a method but that's another avenue.
Yet more
"Harmonics of Fundamental (Which might be significant)"... hmm hence the digital filtering;
but if the sampling rate is too low, there might be a problem with aliasing. Also, mismatched anti-aliasing filters are likely to consume your whole error budget. A rule of thumb of sampling at ten times the signal frequency seems a bit low; a higher rate will make the filter design easier.
Spectral windowing addresses off-frequency issues, along with higher roll-off and attenuation, and is described in "Sliding Spectrum Analysis" by Eric Jacobsen and Richard Lyons, in Streamlining Digital Signal Processing (http://www.amazon.com/Streamlining-Digital-Signal-Processing-Guidebook/dp/1118278380).
In my previous project, after detecting either carrier I was interested in the timing of the frequency changes in immense noise. With carrier-phase generation inconsistencies, the phase error was never quiescent enough to be quantified, so I can't guess any better than you what you might get under your project conditions.
Not to detract from chip's answer (I upvoted it!) but some other options are:
Cross correlation. Off the top of my head, I am not sure what the performance difference between that and the Goertzel algorithm will be, but both should be doable on an embedded system.
Ad-hoc methods. For example, I would try something like this: bandpass the signals to eliminate noise, find the peaks and measure the time difference between the peaks. This will probably be more efficient, and, provided you do a reasonable job throwing out outliers and handling wrap-around, should be extremely robust. The bandpass filters will, themselves, alter the phase, so you'll have to make sure you apply exactly the same filter to both signals.
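A rough sketch of the cross-correlation option (the 1 MHz sample rate and 2-sample lag are invented). Note how the integer-lag peak quantizes the answer, which is exactly the sampling-rate/resolution concern raised in the question:

```python
import numpy as np

fs, f0, n = 1_000_000, 50_000.0, 5000
t = np.arange(n) / fs
rng = np.random.default_rng(5)
v = np.sin(2 * np.pi * f0 * t) + 0.01 * rng.standard_normal(n)
i = np.sin(2 * np.pi * f0 * (t - 2 / fs)) + 0.01 * rng.standard_normal(n)  # 2-sample lag

lag = int(np.argmax(np.correlate(v, i, mode='full'))) - (n - 1)
delay = -lag                          # i lags v by this many samples
dphi = 360.0 * f0 * delay / fs        # convert the lag to degrees at f0
print(dphi)                           # → 36.0
```

At fs = 1 MHz and f0 = 50 kHz, one sample corresponds to 18 degrees of phase, so you would need peak interpolation or a much higher sample rate to approach a 0.1-degree resolution this way.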
If the input signal-to-noise ratios are not too bad, a computationally efficient solution can be built on zero-crossing detection. Also, have a look at http://www.metrology.pg.gda.pl/full/2005/M&MS_2005_427.pdf for a nice comparison of phase-difference detection algorithms, including zero-crossing ones.
Computing 1-bin of a DFT (or using the similar complex Goertzel block filter) will work if the signal frequency is accurately known. (Set the DFT bin or the Goertzel to exactly that frequency).
If the frequency isn't exactly known, you could try using an FFT with an FFTshift to interpolate the frequency magnitude peak, and then interpolate the phase at that frequency for each of the two signals. An FFT will also allow you to window the data, which may improve phase estimation accuracy if the frequency isn't exactly bin centered (or exactly the Goertzel filter frequency). Different windows may improve the phase estimation accuracy for frequencies "between bins". A Blackman-Nutall window will be better than a rectangular window, but there may be better window choices.
The phase measurement accuracy will depend on the S/N ratio, the length of time one samples the two (assumed stationary) signals, and possibly the window used.
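A sketch of the windowed-FFT variant for a tone that is not bin-centered (all numbers hypothetical; Blackman chosen as one reasonable window): take the phase of each signal at the common magnitude peak and subtract.

```python
import numpy as np

fs, n = 1_000_000, 4096
f0 = 50_700.0                      # deliberately off-bin (bins are ~244 Hz apart)
t = np.arange(n) / fs
v = np.sin(2 * np.pi * f0 * t)
i = np.sin(2 * np.pi * f0 * t - np.deg2rad(30))

win = np.blackman(n)               # window reduces leakage for off-bin tones
V = np.fft.rfft(v * win)
I = np.fft.rfft(i * win)

k = int(np.argmax(np.abs(V)))      # magnitude peak nearest f0
dphi = np.degrees(np.angle(V[k] / I[k]))
print(round(float(dphi), 2))       # close to 30; residual error comes from leakage
```

Because both channels get the identical window, the window's own phase contribution cancels in the ratio; only the residual leakage from the negative-frequency image perturbs the estimate.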
If you have a Phase Locked Loop (PLL) that tracks each input, then you can subtract the phase coefficients (of the generator components) to determine offset between the phases. This would also be robust against noise.
