Evaluating distortion due to sampling and quantization - signal-processing

How can I use Simulink to understand distortion caused by digitizing (discretizing) a continuous signal with a finite sampling frequency? Ideally I would like to get a spectrum plot showing various harmonics introduced by this process.
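While the question asks about Simulink, the effect itself is easy to reproduce in a few lines of numpy: quantize a sampled sine with a coarse uniform quantizer and look at its spectrum. All the parameter values below (sample rate, tone frequency, bit depth) are arbitrary choices for illustration:

```python
import numpy as np

fs = 1000.0   # sampling rate in Hz (arbitrary choice for this sketch)
f0 = 50.0     # sine frequency, chosen to fall exactly on an FFT bin
n_bits = 4    # a coarse quantizer makes the distortion easy to see
N = 1000      # 1-second record -> 1 Hz bin spacing

t = np.arange(N) / fs
x = np.sin(2 * np.pi * f0 * t)

# Uniform mid-tread quantizer over [-1, 1]
step = 2.0 / (2 ** n_bits)
xq = step * np.round(x / step)

# Amplitude spectrum of the quantized signal: harmonics of f0 appear,
# because the quantization error is a deterministic, odd-symmetric
# function of the coherently sampled sine.
X = np.abs(np.fft.rfft(xq)) / N
fundamental = X[50]        # the 50 Hz bin
third_harmonic = X[150]    # the 150 Hz bin: pure quantization distortion
```

Sweeping `n_bits` up makes the harmonic floor drop, which is the distortion-vs-resolution trade-off the question is after.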

Related

Calculating SNR using PSD of captured signal and noise

I have captured both a transmitted signal and a stretch with no transmission (i.e. noise only). I would like to calculate the SNR of the signal, and to make sure the following GNURadio flowgraph is not wrong:
In summary, after the PSD of each is calculated, the "Integrate with Decimate over 2048" block sums up the power over the 2048 FFT bins. Then, the noise FFT sum is subtracted from the signal FFT sum. This is divided by the noise FFT sum and converted to dB.
This is the output of my flowgraph:
As calculated by my flowgraph, the power values are:
signal only, raw power: ~0.329
noise only, raw power: 0.000007
SNR in dB: ~46.6dB
I'm using a LoRa node to transmit the signal of interest; the modulation details are here: https://www.thethingsnetwork.org/docs/lorawan/#modulation-and-data-rate
The signal occupies the captured bandwidth (125k) and is sampled at 1 million samples per second.
Your flowgraph should give you the correct SNR value under the following conditions:
the signal and noise sources are uncorrelated
the "noise only" captured by the lower branch has the same characteristics (especially the same average power) as the noise included in the "signal + noise" captured by the upper branch
As an aside, unless you are also using intermediate signals for other purposes, there are a few simplifications that can be made to your flowgraph:
The multiplications in the upper and lower branches by the same constant factor will eventually cancel out in the divide block, so you could skip the scaling altogether.
From Parseval's theorem, the summation of the squared magnitudes in the frequency-domain is proportional to the summation of the squared samples in the time-domain. The FFT blocks would thus not be necessary.
That said, in your flowgraph you are using some intermediate signals for GUI output purposes. In this case, you could simply put the required constant scaling just before the Number Sink.
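For reference, the whole measurement collapses to a few lines once you use time-domain powers directly (the Parseval point above). The sketch below stands in for the GNU Radio blocks with synthetic captures; the 40 kHz placeholder tone, the amplitudes, and the record length are made-up stand-ins for the real LoRa capture:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_samp = 1_000_000, 100_000   # 1 Msps, 0.1 s of capture

# Synthetic stand-ins for the two captures (the real ones come from the SDR):
noise_power = 7e-6
noise_only = np.sqrt(noise_power) * rng.standard_normal(n_samp)
tone = np.sqrt(2 * 0.329) * np.sin(2 * np.pi * 40_000 * np.arange(n_samp) / fs)
sig_plus_noise = tone + np.sqrt(noise_power) * rng.standard_normal(n_samp)

# By Parseval's theorem, the average |x[n]|^2 in time equals the (scaled)
# sum of |X[k]|^2 over the FFT bins, so no FFT stage is needed for a power sum.
p_sn = np.mean(np.abs(sig_plus_noise) ** 2)   # signal + noise power
p_n = np.mean(np.abs(noise_only) ** 2)        # noise power
snr_db = 10 * np.log10((p_sn - p_n) / p_n)    # ~46.7 dB with these numbers
```

The two conditions from the answer still apply: the subtraction `p_sn - p_n` is only valid if the noise in both captures has the same average power and is uncorrelated with the signal.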

Sinusoids with frequencies that are random variables - What does the FFT impulse look like?

I'm currently working on a program in C++ in which I am computing the time varying FFT of a wav file. I have a question regarding plotting the results of an FFT.
Say for example I have a 70 Hz signal that is produced by some instrument with certain harmonics. Even though I say this signal is 70 Hz, it's a real signal and I assume will have some randomness in which that 70Hz signal varies. Say I sample it for 1 second at a sample rate of 20kHz. I realize the sample period probably doesn't need to be 1 second, but bear with me.
Because I now have 20000 samples, when I compute the FFT I will have 20000 (or 19999) frequency bins. Let's also assume that my sample rate, in conjunction with some windowing technique, minimizes spectral leakage.
My question then: will the FFT still produce a relatively ideal impulse at 70 Hz? Or will there 'appear to be' spectral leakage caused by the randomness of the original signal? In other words, what does the FFT of a sinusoid whose frequency is a random variable look like?
Some of the more common modulation schemes will add sidebands that carry the information in the modulation. Depending on the amount and type of modulation with respect to the length of the FFT, the sidebands can either appear separate from the FFT peak, or just "fatten" a single peak.
Your spectrum will appear broadened, and this happens in the real world. Look, for example, at the Voigt profile, which is a Lorentzian (the result of an ideal exponential decay) convolved with a Gaussian of a certain width, the width being determined by stochastic fluctuations, e.g. the Doppler effect on molecules in a gas that is being probed by a narrow-band laser.
You will not get an 'ideal' frequency peak either way. The limit for the resolution of the FFT is one frequency bin (the frequency resolution being the inverse of the time vector length), but even that (as #xvan pointed out) is in general broadened by the window function. If you apply no window, i.e. in effect a rectangular window the length of the time vector, then you'll get spectral peaks that are convolved with a sinc function, and thus broadened.
The best way to visualize this is to make a long vector and plot a spectrogram (often shown for audio signals) with enough resolution so you can see the individual variation. The FFT of the overall signal is then the projection of the moving peaks onto the vertical axis of the spectrogram. The FFT of a given time vector does not have any time resolution, but sums up all frequencies that happen during the time you FFT. So the spectrogram (often people simply use the STFT, short time fourier transform) has at any given time the 'full' resolution, i.e. narrow lineshape that you expect. The FFT of the full time vector shows the algebraic sum of all your lineshapes and therefore appears broadened.
To sum it up there are two separate effects:
a) broadening from the window function (as the commenters 1 and 2 pointed out)
b) broadening from the effect of frequency fluctuation that you are trying to simulate and that happens in real life (e.g. you sitting on a swing while receiving a radio signal).
Finally, note the significance of #xvan's comment: phi = phi(t). If the phase angle is time-dependent then it has a nonzero derivative. dphi/dt acts as a frequency shift, so your instantaneous frequency becomes f0 + (1/2pi)*dphi/dt.
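One way to see both effects at once is to simulate a tone whose instantaneous frequency wanders and compare its spectrum against a stable tone. Everything below (the random-walk frequency model, the roughly +/-5 Hz excursion, and the 99%-energy bandwidth measure) is a made-up illustration, not a standard recipe:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, dur = 20_000, 1.0          # 20 kHz for 1 second, as in the question
n = int(fs * dur)
t = np.arange(n) / fs

# Hypothetical model: instantaneous frequency = 70 Hz plus a slow random
# walk, scaled to stay within about +/-5 Hz. Integrating f_inst gives the
# phase, which is the phi(t) / dphi-dt picture from the answer above.
walk = np.cumsum(rng.standard_normal(n))
f_inst = 70.0 + 5.0 * walk / np.max(np.abs(walk))
x_jitter = np.sin(2 * np.pi * np.cumsum(f_inst) / fs)
x_pure = np.sin(2 * np.pi * 70.0 * t)

def occupied_bins(x, frac=0.99):
    """How many FFT bins hold `frac` of the Hann-windowed signal's energy."""
    p = np.abs(np.fft.rfft(x * np.hanning(x.size))) ** 2
    p_sorted = np.sort(p)[::-1]
    return int(np.searchsorted(np.cumsum(p_sorted), frac * p_sorted.sum()) + 1)

# The wandering tone spreads its energy over more bins: it looks broadened,
# even though at any instant it is a narrow line (visible in a spectrogram).
```

Plotting `np.abs(np.fft.rfft(...))` for both signals shows the broadened line against the near-ideal one.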

Effect of sampling rate of a signal on its Fourier Transform

I am running some experiments in MATLAB, and I have noticed that, keeping the period fixed, increasing the sampling rate of a sine signal causes the different shifted copies in the Fourier transform to become more distinct: they get further apart. I think this makes sense because, as the sampling rate increases, the difference between the Nyquist rate and the sampling rate increases too, which creates an effect opposed to aliasing. I have also noticed that the amplitudes of the peaks of the transform increase as the sampling rate increases. Even the DC component (frequency = 0) changes: it is shown as 0 at some sampling rate, but after increasing the sampling rate it is no longer 0.
All the sampling rates are above the Nyquist rate. It seems odd to me that the Fourier transform changes its shape, since according to the sampling theorem the original signal can be recovered as long as the sampling rate is above the Nyquist rate, no matter whether it is 2 times the Nyquist rate or 20 times. Wouldn't a different Fourier waveform mean a different recovered signal?
I am wondering, formally: what is the impact of the sampling rate?
Thank you.
You're conflating conversion between time-discrete and time-continuous forms of a signal with reversibility of a transform.
The only guarantee is: for a given transform of some discrete signal, its inverse transform will yield the "same" discrete signal back. The discrete signal is abstracted from any notion of frequency. All the transform does is take some vector of complex values and give a dimensionally matching vector of complex values back. You can then take this vector, run an inverse transform on it, and get the "original" vector back. I use quotes since there may be some numerical errors that depend on the implementation. As you can see, nowhere does the word frequency appear, because it's irrelevant at this level.
So, your real question is then, how to get an FFT with values that are useful for something besides getting the original discrete signal back through an inverse transform. Say, how to get an FFT that will tell a human something nice about the frequency content of a signal. A transform "tweaked" for human usefulness, or for use in further signal processing such as automated music transcription, can't reproduce the original signal anymore after inversion. We're trading off veracity for usefulness. Detailed discussion of this can't really fit into one answer, and is off topic here anyway.
Another of your real questions is how to go between a continuous signal and a discrete signal - how to sample the continuous signal, and how to reconstruct it from its discrete representation. The reconstruction means a function (or process) that will yield the values the signal had at points in time between the samples. Again, this is a big topic.
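The round-trip guarantee above is easy to check numerically; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)            # just a vector of numbers, no "frequencies"
x_back = np.fft.ifft(np.fft.fft(x))    # transform, then inverse transform

# The "original" comes back up to floating-point error
roundtrip_error = np.max(np.abs(x_back.real - x))
```

Nothing about sample rates or physical frequencies enters the computation; the transform pair is exact regardless of how the 64 numbers were obtained.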
You are seeing several things when you increase the sample rate:
most (forward) FFT implementations have an implicit scaling factor of N (sometimes sqrt(N)) - if you're increasing your FFT size as you increase the sample rate (i.e. keeping the time window constant) then the apparent magnitude of the peaks in the FFT will increase. When calculating absolute magnitude values you would normally need to take this scaling factor into account.
I'm guessing that you are not currently applying a window function prior to the FFT - this will result in "smearing" of the spectrum, due to spectral leakage, and the exact nature of this will be very dependent on the relationship between sample rate and the frequencies of the various components in your signal. Apply a window function and the spectrum should look a lot more consistent as you vary the sample rate.
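Both effects are easy to reproduce. In the sketch below the time window is held at 1 second while the sample rate (and hence the FFT size) grows, so the unnormalized peak grows with N while the 1/N-normalized peak stays put. The 100 Hz test tone, placed exactly on a bin, is an arbitrary choice for the demo:

```python
import numpy as np

f0 = 100.0   # test tone; with a 1-second window the bin spacing is 1 Hz

def peak_mag(fs, normalize):
    """Peak FFT magnitude of a 1-second sine record, optionally scaled by 1/N."""
    n = int(fs)                          # 1-second window: N grows with fs
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f0 * t)
    mag = np.abs(np.fft.rfft(x))
    return mag.max() / n if normalize else mag.max()

raw_1x, raw_8x = peak_mag(1000.0, False), peak_mag(8000.0, False)   # grows 8x
norm_1x, norm_8x = peak_mag(1000.0, True), peak_mag(8000.0, True)   # both 0.5
```

Dividing by N recovers the half-amplitude of the sine (0.5 for a unit-amplitude tone) independently of the sample rate, which is the "scaling factor" point above.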

Stein Unbiased Estimate of Risk (Sure) to denoise signals

I wish to use Stein Unbiased Estimate of Risk (Sure) for denoising signals.
I have a 1-Dimensional signal. I am using wavelets to decompose the signal into multiple levels of approximate and detail coefficients.
For denoising the original signal, do I need to do a thresholding for every level of detail coefficients or doing it on the last level of detail coeffcient will do the job ?
Thresholding is usually applied to all the frequencies of a signal, because the procedure exploits the fact that the wavelet transform maps white noise (purely random, uncorrelated, constant power spectral density) in the signal domain to white noise in the transform domain, so the noise stays spread across the different frequencies. Thus, while signal energy becomes concentrated into fewer coefficients in the transform domain, noise energy does not. Noises with different spectral properties map differently, and this is where the choice of thresholding procedure becomes important.
Thresholding only the highest decomposition level (lowest frequencies) while leaving the lower levels (higher frequencies) untouched sounds a little strange if you want to reconstruct the whole signal.
However, you could also extract a level and denoise only its related range of frequencies (e.g. from level 1 to level 2) if there is a particular frequency range you are interested in.
Regarding the thresholding function, be aware in any case that SURE gives different results depending on the types of noise in the signal. For example, it will reduce the spread of white noise in the detail components but only shrinks large amplitudes. For signals where, together with white noise, you also have other noise colors such as random walk and flicker noise, SURE is not an effective procedure.
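As a concrete sketch of "threshold every level": below is a self-contained Haar decomposition (standing in for whatever wavelet family you use; a real implementation would use a library such as PyWavelets) with soft thresholding applied to the detail coefficients at each level. The test signal, noise level, and fixed threshold are made-up illustration values, not a SURE-derived threshold:

```python
import numpy as np

def haar_step(x):
    """One Haar analysis step: approximation and detail halves (orthonormal)."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_inv(a, d):
    """Invert one Haar analysis step."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, levels, thresh):
    """Soft-threshold the detail coefficients at EVERY level, then reconstruct."""
    details, a = [], x
    for _ in range(levels):
        a, d = haar_step(a)
        details.append(np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0))
    for d in reversed(details):
        a = haar_inv(a, d)
    return a

# Illustration: a smooth tone buried in white noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 4 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
denoised = denoise(noisy, levels=4, thresh=0.3)

mse_before = np.mean((noisy - clean) ** 2)
mse_after = np.mean((denoised - clean) ** 2)
```

Because the orthonormal transform keeps the white noise spread evenly across all levels while the smooth signal concentrates in the approximation, shrinking the details at every level removes noise with little damage to the signal.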

Primary FFT Coefficients vs Low-Pass Filter

I'm working on a signal processing problem, extracting some features to feed a classifier. Among these features is the sum of the first 5 FFT coefficients. As you know, the first FFT coefficients indicate how dominant the low-frequency components of a signal are. This is very close to what a low-pass filter gives.
Here I'm suspicious that computing the FFT just to take those first 5 coefficients is unnecessary work. I think applying a low-pass filter will simply remove the high-frequency components and won't have a significant effect on the first FFT coefficients. However, there may be some other way, in combination with a low-pass filter, to extract the same information (that contained in the first five FFT coefficients) without using the FFT.
Do you have any ideas or suggestions regarding this issue?
Thanks in advance.
If you just need an indicator for the low-frequency part of a signal, I suggest doing something really simple. Just take an ordinary low-pass filter, for instance a 2nd-order Butterworth, with the cutoff frequency set appropriately (5 Hz in your case, if I understood right). Then compute the energy (sum of squared values) or RMS value over your window (length 100). Or perhaps take the ratio of the low-frequency energy to the overall energy of the window, to get a relative measure. That should give you a pretty good indicator of the low-frequency contributions of your signal.
People tend to overuse the fft for all kinds of really simple tasks. In 90% of the use cases an fft can be replaced by a simpler algorithm.
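A minimal numpy sketch of that indicator, with a hand-rolled biquad standing in for the Butterworth (coefficients from the well-known RBJ audio-EQ cookbook); the 100 Hz sample rate and the test tones are invented for the demo:

```python
import numpy as np

def butter2_lowpass(x, fc, fs):
    """2nd-order Butterworth low-pass biquad (RBJ cookbook, Q = 1/sqrt(2))."""
    w0 = 2 * np.pi * fc / fs
    alpha = np.sin(w0) / np.sqrt(2)        # sin(w0) / (2*Q) with Q = 1/sqrt(2)
    cosw = np.cos(w0)
    b = np.array([(1 - cosw) / 2, 1 - cosw, (1 - cosw) / 2])
    a = np.array([1 + alpha, -2 * cosw, 1 - alpha])
    b, a = b / a[0], a / a[0]
    y = np.empty_like(x)
    x1 = x2 = y1 = y2 = 0.0                # direct-form I state, zero initial
    for i, xn in enumerate(x):
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        y[i] = yn
        x2, x1, y2, y1 = x1, xn, y1, yn
    return y

# 100-sample window with a 2 Hz component (kept) and a 20 Hz one (rejected)
fs = 100.0
t = np.arange(100) / fs
x = 0.8 * np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)
low = butter2_lowpass(x, fc=5.0, fs=fs)

# Relative low-frequency energy: the ratio indicator suggested above
low_ratio = np.sum(low ** 2) / np.sum(x ** 2)
```

In practice one would use a filter-design routine (e.g. from a DSP library) instead of hard-coding the biquad, but the indicator itself is just the two sums at the end.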
It seems you should take a look at the Goertzel algorithm: for the seemingly limited number of frequencies you need, it should take less computation. After updating the feedback part on each sample, you can choose how often to generate your "feature metric", and a little additional weighting of the results can yield a respectable low-pass filter.
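For reference, here is a minimal Goertzel implementation (the standard recurrence) next to the equivalent FFT bin; the 256-point test vector is invented for the demo:

```python
import numpy as np

def goertzel_power(x, k):
    """Squared magnitude of DFT bin k via the Goertzel recurrence, O(N) per bin."""
    n = len(x)
    coeff = 2.0 * np.cos(2.0 * np.pi * k / n)
    s1 = s2 = 0.0
    for sample in x:                       # one multiply-accumulate per sample
        s0 = sample + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2   # |X[k]|^2 from the final state

n = 256
x = np.sin(2 * np.pi * 8 * np.arange(n) / n)   # a tone exactly on bin 8
p_goertzel = goertzel_power(x, 8)
p_fft = np.abs(np.fft.fft(x)[8]) ** 2          # same bin via a full FFT
```

With only the first 5 bins needed, five such recurrences cost far less than a full FFT, which is the computational point of the answer above.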
