I want to know how to calculate the interference weight for a combination of APs running on different channel frequencies.
Let's say I have 10 APs running different modes, such as 11a, 11na and 11ac.
Say the 11a devices are running a 20 MHz channel (36), the 11na devices are running 40 MHz (channels 36 and 40), and the 11ac devices are running 80 MHz (channels 36, 40, 44 and 48).
Now, how do these frequencies interfere with each other, and how do I calculate the interference weight among them?
First of all, you should read the 802.11-2012 standard and the 802.11ac amendment to understand the PHY differences between the three modes. But more generally, I think a more precise definition of "interference weight", or at least of how you would use this measure, is needed before anyone can really help.
In practice, interference depends on many variables, is highly dynamic, and has many elements of randomness. The standard allows you to report a quantity called RSSI for the signal you are measuring, but the actual measurement method is proprietary and no two vendors will do it the same way. Moreover, different hardware/firmware/drivers will measure signal strength and SNR differently at the exact same location and time.
IMO, all measures of signal quality are by definition averages of some kind. Interference can be defined and measured more precisely on a per-symbol basis, but with millions of symbols per second this is of limited use.
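That said, if all you want is a crude, first-order number driven purely by how the channels in your example overlap, a sketch like the following (Python, with made-up AP labels) captures it. It deliberately ignores everything warned about above (transmit power, duty cycle, distance, PHY behaviour, measurement variation), so treat it as a starting point only:

```python
# Naive spectral-overlap "interference weight": the fraction of an AP's
# occupied bandwidth that another AP's channel also occupies.
# This is only a channelization-based sketch -- it ignores power, duty cycle,
# distance and all the PHY differences discussed above.

# Each AP is described by the set of 20 MHz sub-channels it occupies.
aps = {
    "11a_ch36_20MHz":  {36},
    "11na_ch38_40MHz": {36, 40},
    "11ac_ch42_80MHz": {36, 40, 44, 48},
}

def overlap_weight(victim, interferer):
    """Fraction of the victim's bandwidth covered by the interferer."""
    return len(victim & interferer) / len(victim)

for name_v, chans_v in aps.items():
    for name_i, chans_i in aps.items():
        if name_v != name_i:
            w = overlap_weight(chans_v, chans_i)
            print(f"{name_i} -> {name_v}: overlap weight = {w:.2f}")
```

For example, the 40 MHz AP fully overlaps the 20 MHz AP (weight 1.0), while the 20 MHz AP only covers a quarter of the 80 MHz AP's bandwidth (weight 0.25).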
Related
I am a little confused about BER. I found that the BER of 16QAM is better than that of 32QAM. Is this right? If so, why do we go to higher-order QAM (i.e. 32, 64, etc.)?
Thank you in advance.
If you were targeting the best BER, you wouldn't even go up to 16QAM; you would stick with 4QAM/QPSK. You would have a very robust transmission, with the downside of low spectral efficiency.
16QAM can achieve a spectral efficiency of 4 bit/s/Hz, whereas 64QAM already reaches 6 bit/s/Hz. This means you can increase the bit rate by 50% compared to the previous setting. That is especially important if you have limited resources such as channels or bandwidth. In wireless transmission you only have a bandwidth of a few MHz and there is no parallel channel for other users, so spectral efficiency is the key to increasing data throughput. (In fact there is something like a parallel channel, called MIMO, but you get the idea.)
See the table here for an overview of wireless transmission systems and their spectral efficiency. Spectral Efficiency
Even when you need a more robust transmission (in terms of BER), you can pick a relatively high modulation order and use the additional bits for redundant information. In case of a bit error, the receiver is then able to repair the original content. This is called Forward Error Correction (FEC).
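To make the trade-off concrete, here is a small sketch using the textbook approximation for Gray-coded square M-QAM over AWGN (so it covers 4/16/64-QAM but not the non-square 32QAM from the question); the Eb/N0 value of 12 dB is just an illustrative choice:

```python
# Rough illustration of the BER-vs-efficiency trade-off using the standard
# approximation for Gray-coded square M-QAM over AWGN:
#   BER ~= (4/log2(M)) * (1 - 1/sqrt(M)) * Q( sqrt(3*log2(M)/(M-1) * Eb/N0) )
# (an approximation only; exact curves and coded performance will differ)
import numpy as np
from scipy.special import erfc

def qfunc(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def qam_ber(M, ebn0_db):
    k = np.log2(M)                        # bits per symbol ~ spectral efficiency (bit/s/Hz)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return (4.0 / k) * (1.0 - 1.0 / np.sqrt(M)) * qfunc(np.sqrt(3.0 * k / (M - 1.0) * ebn0))

for M in (4, 16, 64):
    print(f"{M}-QAM: {np.log2(M):.0f} bit/s/Hz, "
          f"approx. BER at 12 dB Eb/N0 = {qam_ber(M, 12.0):.1e}")
```

The output shows the point of the answer: each step up in modulation order buys spectral efficiency at the cost of a worse uncoded BER at the same Eb/N0, which FEC then has to win back.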
I have an electromagnetic sensor which reports the electromagnetic field strength it reads in space.
And I also have a device that emits an electromagnetic field. It covers an area of about 1 meter.
I want to predict the position of the sensor from its reading.
But the sensor is affected by metal, which makes the position prediction drift.
It's like: if the reading is 1 and you put the sensor near a metal object, you get 2.
Something like that. It's not just noise, it's a permanent drift. Unless you remove the metal, it will always read 2.
What are the techniques or topics I need to learn in general to recover reading 1 from 2?
Suppose that the metal is fixed somewhere in space and I can calibrate the sensor by putting it near metal first.
You can suggest anything about removing the drift in general. Also, please consider that I can place another emitter somewhere, which should make it easier to recover the true reading.
Let me suggest that you view your sensor output as a combination of two factors:
sensor_output = emitter_effect + environment_effect
And you want to obtain emitter_effect without the addition of environment_effect. So, of course you need to subtract:
emitter_effect = sensor_output - environment_effect
Subtracting the environment's effect on your sensor is usually called compensation. In order to compensate, you need to be able to model or predict the effect your environment (extra metal floating around) is having on the sensor. The form of the model for your environment effect can be very simple or very complex.
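As a minimal sketch of that idea, assuming the environment effect is additive and stationary, and that you can record some readings with no emitter contribution (emitter off or far away) while the sensor sits near the metal:

```python
# Minimal compensation sketch: calibrate the environment's contribution once
# (metal fixed in space, emitter off or far away), then subtract it.
# Assumes the environment effect is additive and stationary, as in the model above.

def calibrate_environment(calibration_samples):
    """Average a batch of readings taken with no emitter contribution."""
    return sum(calibration_samples) / len(calibration_samples)

def compensate(sensor_output, environment_effect):
    """emitter_effect = sensor_output - environment_effect"""
    return sensor_output - environment_effect

env = calibrate_environment([1.02, 0.98, 1.01, 0.99])  # made-up readings near the metal
print(compensate(2.0, env))  # recovers roughly 1.0, as in the example in the question
```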
Simple methods generally use a separate sensor to estimate environment_effect. I'm not sure exactly what your scenario is, but you may be able to select a sensor which would independently measure the quantity of interference (metal) in your setup.
More complex methods can perform compensation without referring to an independent sensor for measuring interference. For example, if you expect the distance to be 10.0 on average, with only occasional deviations, you could use that fact to estimate how much interference is present. In my experience, this type of method is less reliable; systems with independent sensors for measuring interference are more predictable and reliable.
You can start reading about Kalman filtering if you're interested in model-based estimation:
https://en.wikipedia.org/wiki/Kalman_filter
It's a complex topic, so you should expect a steep learning curve. Kalman filtering (and the related family of Bayesian estimation methods) is the formal way to convert a "bad sensor reading" into a "corrected sensor reading".
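For orientation, here is a minimal scalar Kalman-filter sketch (random-walk state model); the process and measurement noise values q and r are assumed, tuned numbers here, not derived from your sensor:

```python
# Minimal scalar Kalman filter sketch (random-walk state model) for smoothing
# a noisy sensor reading.  q (process noise) and r (measurement noise) are
# assumed/tuned values here, not measured ones.
def kalman_1d(measurements, q=1e-4, r=1e-2, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # predict: state assumed constant, uncertainty grows by q
        p = p + q
        # update with measurement z
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

print(kalman_1d([1.1, 0.9, 1.05, 0.95, 1.0]))
```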
I have a data set of 60 sensors making 1684 measurements. I wish to decrease the number of sensors used during the experiment, and use the remaining sensor data to predict (using machine learning) the readings of the removed sensors.
I have had a look at the data (see image) and uncovered several strong correlations between the sensors, which should make it possible to remove X sensors and use the remaining sensors to predict their behaviour.
How can I "score" which set of sensors (X) best predicts the remaining set (60 - X)?
Are you familiar with Principal Component Analysis (PCA)? It's a child of Analysis of Variance (ANOVA). Dimensionality Reduction is another term to describe this process.
These are usually aimed at a set of inputs that predict a single output, rather than at a set of peer measurements. To adapt your case to these methods, I would begin by considering each of the 60 sensors, in turn, as the "ground truth", to see which ones can be most reliably predicted by the remainder. Remove those and repeat the process until you reach your desired threshold of correlation.
I also suggest a genetic method to do this winnowing; perhaps random forests would be of help in this phase.
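As one possible concrete form of that "treat each sensor as ground truth, drop the most predictable, repeat" procedure, here is a greedy sketch using ordinary least squares and R² as the score; the array shapes and the random example data are placeholders for your 1684 x 60 measurement matrix:

```python
# Greedy winnowing sketch: repeatedly drop the sensor that the remaining
# sensors predict best (highest R^2 from ordinary least squares).
# `data` is assumed to be shaped (n_measurements, n_sensors), e.g. (1684, 60).
import numpy as np

def r2_from_others(data, j, keep):
    """R^2 of predicting column j from the other kept columns via least squares."""
    X = np.column_stack([data[:, [k for k in keep if k != j]],
                         np.ones(len(data))])          # add intercept
    y = data[:, j]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1.0 - resid.var() / y.var()

def greedy_drop(data, n_drop):
    keep = list(range(data.shape[1]))
    dropped = []
    for _ in range(n_drop):
        scores = {j: r2_from_others(data, j, keep) for j in keep}
        worst = max(scores, key=scores.get)            # most predictable sensor
        keep.remove(worst)
        dropped.append((worst, scores[worst]))
    return keep, dropped

# Example with random data (replace with your 1684 x 60 measurement matrix):
keep, dropped = greedy_drop(np.random.randn(200, 10), n_drop=3)
print("dropped (sensor, R^2):", dropped)
```

A genetic algorithm or random-forest feature-importance pass, as suggested above, could replace the greedy loop if the greedy choice turns out to be too short-sighted.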
I have RSSI readings but no idea how to find the measurement and process noise. What is the way to find those values?
Not at all. RSSI stands for "Received Signal Strength Indicator" and by itself says absolutely nothing about the signal-to-noise situation relevant to your Kalman filter. RSSI is not a well-defined thing; it can mean a million things.
Defining the "strength" of a signal is tricky. Imagine you're sitting in a car with an FM radio. What do the RSSI bars on that radio's display mean? Maybe:
The amount of Energy passing through the antenna port (including noise, because at this point no one knows what noise and signal are)?
The amount of Energy passing through the selected bandpass for the whole ultra shortwave band (78-108 MHz, depending on region) (incl. noise)?
Energy coming out of the preamplifier (incl. Noise and noise generated by the amplifier)?
Energy passing through the IF filter, which selects your individual station (is that already the signal strength as you want to define it?)?
RMS of the voltage observed by the ADC (the ADC probably samples much higher than your channel bandwidth) (is that the signal strength as you want to define it?)?
RMS of the digital values after a digital channel selection filter (i.t.t.s.s.a.y.w.t.d.i?)?
RMS of the digital values after FM demodulation (i.t.t.s.s.a.y.w.t.d.i?)?
RMS of the digital values after FM demodulation and audio frequency filtering for a mono mix (i.t.t.s.s.a.y.w.t.d.i?)?
RMS of digital values in a stereo audio signal (i.t.t.s.s.a.y.w.t.d.i?) ?
...
As you can imagine, for systems like FM radios this is still relatively easy. For things like mobile phones, multichannel GPS receivers, WiFi cards, digital beamforming radars, etc., RSSI really can mean everything or nothing at all.
You will have to mathematically define a way to describe what your noise is. Then you will need to find the formula that describes what "RSSI" is in your exact implementation, and only then can you deduce whether knowing RSSI says anything about the process noise.
A Kalman Filter is a mathematical construct for computing the expected state of a system that is changing over time, given an initial state and noisy measurements of that system. The key to the "process noise" component of this is the fact that the system is changing. The way that the system changes is the process.
Your state might change due to manual control or due to the nature of the system. For example, if you have a car on a hill, it can roll down the hill naturally (described by the state transition matrix), or you might drive it down the hill manually (described by the control input matrix). Any noise that might affect these inputs - wind, bumps, twitches - can be described with the process noise.
You can measure the process noise the way you would measure variance in any system - take the expected dynamics and compare them with the true dynamics to generate a covariance matrix.
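As a sketch of that procedure, assuming you have a log of system states and a state-transition matrix F (control inputs omitted for brevity): the sample covariance of the one-step prediction residuals gives an estimate of the process-noise covariance Q. The measurement noise R can be estimated the same way from the variance of readings taken while the state is held constant.

```python
# Estimate a process-noise covariance Q as described above: run the
# (noise-free) state-transition model one step ahead and compare it with what
# the system actually did; the covariance of those residuals is Q.
import numpy as np

def estimate_process_noise(x_log, F):
    predicted = x_log[:-1] @ F.T        # expected dynamics (one-step model prediction)
    residuals = x_log[1:] - predicted   # actual dynamics minus expected dynamics
    return np.cov(residuals, rowvar=False)

# Toy example: constant-velocity model [position, velocity], dt = 1
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
rng = np.random.default_rng(0)
x = np.array([0.0, 1.0])
states = [x]
for _ in range(500):
    x = F @ x + rng.normal(scale=[0.05, 0.02])   # true dynamics plus injected process noise
    states.append(x)
print(estimate_process_noise(np.array(states), F))   # roughly diag(0.05**2, 0.02**2)
```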
I'm working on an embedded project where I have to bring a transducer to resonance by calculating the phase difference between its voltage and current waveforms and driving that difference to zero by changing the frequency. The current and voltage are signals of the same frequency at any instant, but that frequency is not fixed (approximately 47 kHz to 52 kHz). All I have to do is calculate the phase difference between these two signals. Which method will be most effective?
An FFT of the two signals and then the phase difference between the specific components?
Or cross-correlation of the two signals?
Or another method, if any? Which method will give me the most accurate result, and with what resolution? Does the sampling rate affect the phase-difference resolution (the minimum phase difference that can be sensed)?
I'm new to digital signal processing, so please correct me if I've made any mistakes.
ADDITIONAL DETAILS:
Noise in my system can be white/Gaussian noise (not significant) and harmonics of the fundamental (which might be the significant one in the resonance-mismatch case).
Yes, the 4046 can be a good alternative with switching regulators. I'm working with an NCO/DDS where I can scale/reshape the sinusoid on an ongoing basis.
Implementing an analog filter would be very complex, since I would need a high-order filter with a steep roll-off for harmonic removal, so I'm choosing a DSP-based filter; it's easy to work with MATLAB and DSP processors.
What sampling rate would you suggest for a ~50 kHz (47 kHz - 52 kHz) system to achieve, with an FFT or Goertzel, a phase resolution of preferably 0.1 degrees or better? The frequency steps will vary from as small as ~1-2 Hz up to 50-200 Hz.
My frequency is variable (45 kHz - 55 kHz) but will be known to my system. Knowing the phase error for the last fed frequency is most desirable. After the FFT and digital filtering, an IFFT can be performed to obtain less noisy samples for further processing, so I guess the FFT can do both tasks.
But I'm wondering about the phase-difference accuracy, because that's the crucial part.
The Goertzel algorithm (http://www.embedded.com/design/configurable-systems/4024443/The-Goertzel-Algorithm) is a fairly efficient tone-detection method that resolves the signal into real and imaginary components. I'll assume you can do the arithmetic to get the phase difference, or just its polarity, as you require.
Resolution versus time constant is a design trade-off; this article highlights the issues: http://www.mstarlabs.com/dsp/goertzel/goertzel.html
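For illustration, here is a minimal Goertzel sketch in Python that computes the phase of each signal at the known drive frequency and then the V/I phase difference. The sample rate, block length and test signals are made-up values, and any constant phase offset in the Goertzel output cancels when the two results are divided:

```python
# Minimal Goertzel sketch for the phase of a known tone, then the V/I phase
# difference.  f0, fs and the test signals are made-up values; on a real MCU
# you would run this on fixed-point samples.
import numpy as np

def goertzel(x, f0, fs):
    """Complex Goertzel output at frequency f0 (Hz) for sample rate fs (Hz)."""
    w = 2.0 * np.pi * f0 / fs
    coeff = 2.0 * np.cos(w)
    s_prev = s_prev2 = 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return (s_prev - s_prev2 * np.cos(w)) + 1j * (s_prev2 * np.sin(w))

fs, f0, n = 1_000_000, 50_000, 2000               # 1 MHz sampling, 50 kHz tone, 2 ms block
t = np.arange(n) / fs
v = np.sin(2 * np.pi * f0 * t)                    # "voltage"
i = np.sin(2 * np.pi * f0 * t - np.deg2rad(30))   # "current", lagging by 30 degrees

phase_diff = np.angle(goertzel(v, f0, fs) / goertzel(i, f0, fs))
print(np.rad2deg(phase_diff))                     # ~ +30 degrees
```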
Additional
"What accuracy can be obtained?"
It depends... upon what you are faced with (i.e., signal levels, external noise, etc.), what hardware you have (i.e., ADC, processor, etc.), and how you implement your solution (sample rate, numerical precision, etc.). Without the complete picture, I would only be guessing at what you could achieve, as the Goertzel approach is far from easy.
But I imagine that, for a high-school project with good signal levels and low noise, the easier method of using the phase comparator of a 4046 PLL (comparator 2, as it locks at zero degrees; www.nxp.com/documents/data_sheet/HEF4046B.pdf) will likely get you down to a few degrees.
One other issue, if you have a high-Q transducer, is generating a high-resolution frequency. There is a method, but that's another avenue.
Yet more
"Harmonics of Fundamental (Which might be significant)"... hmm hence the digital filtering;
but if the sampling rate is too low then there might be a problem with aliasing. Also, mismatched anti-aliasing filters are likely to take your whole error budget. A rule of thumb of ten times sampling frequency seems a bit low, and it being higher it will make the filter design easier.
Windowing of the sampled data addresses off-frequency issues, along with higher roll-off and attenuation, and is described in Sliding Spectrum Analysis by Eric Jacobsen and Richard Lyons, in Streamlining Digital Signal Processing (http://www.amazon.com/Streamlining-Digital-Signal-Processing-Guidebook/dp/1118278380).
In a previous project, after detecting either carrier I was interested in the timing of frequency changes in immense noise. With the carrier phase-generation inconsistencies, the phase error was never quiescent enough to be quantified, so I can't guess any better than you what you might get under your project conditions.
Not to detract from chip's answer (I upvoted it!) but some other options are:
Cross-correlation. Off the top of my head I'm not sure what the performance difference between it and the Goertzel algorithm will be, but both should be doable on an embedded system (a sketch follows this list).
Ad-hoc methods. For example, I would try something like this: band-pass the signals to eliminate noise, find the peaks, and measure the time difference between the peaks. This will probably be more efficient and, provided you do a reasonable job of throwing out outliers and handling wrap-around, should be extremely robust. The band-pass filters will themselves alter the phase, so you'll have to make sure you apply exactly the same filter to both signals.
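Here is a rough sketch of the cross-correlation variant mentioned above: correlate the two signals over a range of lags, interpolate the peak, and convert the lag to a phase at the known frequency. The sample rate and test signals are made-up, and it assumes the true phase difference is not near ±180° (so the peak doesn't sit at the search boundary):

```python
# Cross-correlation sketch: estimate the V/I phase difference from the lag of
# the correlation peak, given the (known) signal frequency.  A real
# implementation would add boundary checks and outlier handling.
import numpy as np

def phase_from_xcorr(v, i, f0, fs):
    """Phase (degrees) by which v leads i, from the cross-correlation peak."""
    period = fs / f0                                   # samples per cycle
    max_lag = int(period // 2)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = np.array([np.dot(v, np.roll(i, lag)) for lag in lags])
    k = int(np.argmax(corr))
    # parabolic interpolation around the peak for sub-sample lag resolution
    y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
    best = lags[k] + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return -360.0 * best * f0 / fs                     # lag -> degrees

fs, f0, n = 1_000_000, 50_000, 2000                    # made-up setup, 100 full cycles
t = np.arange(n) / fs
v = np.sin(2 * np.pi * f0 * t)                         # "voltage"
i = np.sin(2 * np.pi * f0 * t - np.deg2rad(30))        # "current", lagging by 30 degrees
print(phase_from_xcorr(v, i, f0, fs))                  # ~ +30
```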
If the input signal-to-noise ratios are not too bad, a computationally efficient solution can be built on zero-crossing detection. Also, have a look at http://www.metrology.pg.gda.pl/full/2005/M&MS_2005_427.pdf for a nice comparison of phase-difference detection algorithms, including zero-crossing ones.
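A zero-crossing sketch along those lines, with linear interpolation between samples for sub-sample timing; it assumes reasonably clean (band-limited) signals and naively averages the wrapped per-cycle differences, which is fine as long as the true difference is not near ±180°:

```python
# Zero-crossing sketch: estimate phase from the time offset between the
# rising zero crossings of the two (band-limited, low-noise) signals.
import numpy as np

def rising_zero_crossings(x, fs):
    """Interpolated times (seconds) where x crosses zero going positive."""
    idx = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
    frac = -x[idx] / (x[idx + 1] - x[idx])             # linear interpolation
    return (idx + frac) / fs

def phase_from_zero_crossings(v, i, f0, fs):
    tv = rising_zero_crossings(v, fs)
    ti = rising_zero_crossings(i, fs)
    m = min(len(tv), len(ti))
    raw = 360.0 * f0 * (ti[:m] - tv[:m])               # positive: v leads i
    wrapped = (raw + 180.0) % 360.0 - 180.0            # wrap into [-180, 180)
    return float(np.mean(wrapped))                     # naive average of wrapped values

fs, f0, n = 1_000_000, 50_000, 2000
t = np.arange(n) / fs
v = np.sin(2 * np.pi * f0 * t)
i = np.sin(2 * np.pi * f0 * t - np.deg2rad(30))
print(phase_from_zero_crossings(v, i, f0, fs))         # ~ +30
```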
Computing 1-bin of a DFT (or using the similar complex Goertzel block filter) will work if the signal frequency is accurately known. (Set the DFT bin or the Goertzel to exactly that frequency).
If the frequency isn't exactly known, you could try using an FFT (with an fftshift) to interpolate the frequency of the magnitude peak, and then interpolate the phase at that frequency for each of the two signals. An FFT will also allow you to window the data, which may improve phase-estimation accuracy if the frequency isn't exactly bin-centered (or exactly the Goertzel filter frequency). Different windows may improve the phase-estimation accuracy for frequencies "between bins". A Blackman-Nuttall window will be better than a rectangular window, but there may be better window choices.
The phase measurement accuracy will depend on the S/N ratio, the length of time one samples the two (assumed stationary) signals, and possibly the window used.
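A sketch of that FFT approach, using a Blackman window as a stand-in for the window discussion (peak interpolation omitted for brevity); with both signals windowed identically and the phase read at the same peak bin, the leakage-induced phase terms largely cancel:

```python
# Windowed-FFT sketch: window both signals, take the phase difference at the
# magnitude-peak bin of the voltage spectrum.
import numpy as np

def phase_from_fft(v, i, fs):
    w = np.blackman(len(v))
    V = np.fft.rfft(v * w)
    I = np.fft.rfft(i * w)
    k = int(np.argmax(np.abs(V)))                      # magnitude peak bin
    f_est = k * fs / len(v)
    return f_est, np.degrees(np.angle(V[k] / I[k]))    # phase by which v leads i

fs, f0, n = 1_000_000, 50_000, 2000                    # made-up sampling setup
t = np.arange(n) / fs
v = np.sin(2 * np.pi * f0 * t)
i = np.sin(2 * np.pi * f0 * t - np.deg2rad(30))
print(phase_from_fft(v, i, fs))                        # ~ (50000.0, 30.0)
```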
If you have a Phase Locked Loop (PLL) that tracks each input, then you can subtract the phase coefficients (of the generator components) to determine offset between the phases. This would also be robust against noise.