How to remove transient of radio signal - signal-processing

I tried to capture a pulse tone from a radio and to find its frequency using an FFT. But the radio sends a transient before transmitting the original pulse, so I pick up the transient's frequency as well, which I don't want. I captured the same pulses on an oscilloscope and found the following pulse sequence:
The transient pulse runs from 0 to 3.8 ms, then there is a dead interval of around 4 ms, and then the original pulse is received.
Can anyone please guide me on how to remove this transient and detect only my original frequency?
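One common fix is to gate out the transient in the time domain before taking the FFT, using the timings measured on the scope. A minimal sketch, assuming a known sample rate and using a synthetic capture as a stand-in for the real one (the 7.8 ms offset is the 3.8 ms transient plus the ~4 ms dead time):

```python
import numpy as np

fs = 100_000                              # assumed sample rate, Hz
t = np.arange(int(0.02 * fs)) / fs        # 20 ms synthetic capture
# Synthetic stand-in: 3.8 ms transient, ~4 ms dead time, then the wanted tone
x = np.where(t < 0.0038, np.sin(2 * np.pi * 9000.0 * t), 0.0)
x = x + np.where(t >= 0.0078, np.sin(2 * np.pi * 2000.0 * t), 0.0)

start = int(0.0078 * fs)                  # gate: skip 3.8 ms transient + ~4 ms dead time
gated = x[start:]
spectrum = np.abs(np.fft.rfft(gated * np.hanning(len(gated))))
freqs = np.fft.rfftfreq(len(gated), d=1.0 / fs)
print(freqs[spectrum.argmax()])           # dominant frequency of the gated pulse (~2000 Hz)
```

If the capture's trigger point drifts, an energy detector on a sliding window can locate the start of the original pulse instead of a fixed offset.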

Related

How does ultra wide band determine position?

Apple iPhones now have a U1 chip, described as providing "Ultra Wideband technology for spatial awareness". I've heard the technology can do time-of-flight calculations to determine range, but that doesn't answer how it determines relative position. How does the positioning work?
How does ultra-wideband work?
Travelling at the speed of light
The idea is to send radio waves from one module to another and measure the time of flight (TOF), in other words, how long it takes. Because radio waves travel at the speed of light (c = 299792458 m/s), we can simply multiply the time of flight by this speed to get the distance.
However, perhaps you've noticed that radio waves travel fast. Very fast! In a single nanosecond, which is a billionth of a second, a wave has travelled almost 30 cm. So if we want to perform centimetre-accurate ranging, we have to measure the timing very, very accurately! So now the question is, how can we do this? How can we even measure the timing of... a wave?
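A quick numeric check of those figures (nothing assumed beyond the speed of light):

```python
c = 299_792_458.0        # speed of light, m/s
tof = 1e-9               # one nanosecond of flight time
print(c * tof)           # ~0.2998 m: roughly 30 cm per nanosecond
print(0.01 / c)          # ~3.3e-11 s: timing accuracy needed for 1 cm ranging
```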
It's all about the bandwidth
In physics, there is something called Heisenberg's uncertainty principle. The principle states that it is impossible to know both the frequency and the timing of a signal with arbitrary precision. Consider for example a sinusoid: a signal with a well-known frequency but very ill-determined timing, since the signal has no beginning or end. However, if we combine multiple sinusoidal signals with slightly different frequencies, we can create a 'pulse' with more defined timing, i.e., the peak of the pulse. This is seen in the following figure from Wikipedia, which sequentially adds sinusoids to a signal to get a sharper pulse:
[Figure 1: sinusoids with slightly different frequencies are added one by one, producing a progressively sharper pulse (from Wikipedia).]
The range of frequencies used for this signal is called the bandwidth Δf. Using Heisenberg's uncertainty principle we can roughly determine the width Δt of the pulse, given a certain bandwidth Δf:
Δf · Δt ≥ 1/(4π)
From this simple formula we can see that if we want a narrow pulse, which is necessary for accurate timing, we need to use a large bandwidth. For example, using the bandwidth Δf = 20 MHz available to wifi systems, we obtain a pulse width of Δt ≥ 4 ns. At the speed of light this corresponds to a pulse about 1.2 m 'long', which is too much for accurate ranging. Firstly, it is hard to accurately determine the peak of such a wide pulse; secondly, there are reflections. Reflections come from the signal bouncing off objects (walls, ceilings, closets, desks, etc.) in the surrounding environment. These reflections are also captured by the receiver and may overlap with the line-of-sight pulse, which makes it very hard to measure the true peak of the pulse. With pulses 4 ns wide, any object within 1.2 m of the receiver or the transmitter will cause an overlapping pulse. Because of this, wifi time-of-flight ranging is not suitable for indoor applications.
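To see where the 4 ns and 1.2 m figures come from, here is the uncertainty bound evaluated for both bandwidths (a direct transcription of the formula above):

```python
import math

c = 299_792_458.0  # speed of light, m/s

def min_pulse_width(bandwidth_hz):
    """Minimum pulse width from the uncertainty relation df * dt >= 1/(4*pi)."""
    return 1.0 / (4.0 * math.pi * bandwidth_hz)

for bw in (20e6, 500e6):  # wifi channel vs. UWB channel
    dt = min_pulse_width(bw)
    print(f"{bw / 1e6:.0f} MHz -> {dt * 1e9:.2f} ns pulse, {c * dt:.2f} m 'long'")
# 20 MHz  -> 3.98 ns, 1.19 m
# 500 MHz -> 0.16 ns, 0.05 m
```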
Ultra-wideband signals typically have a bandwidth of 500 MHz, resulting in pulses only 0.16 ns wide! This timing resolution is so fine that, at the receiver, we are able to distinguish several reflections of the signal. Hence, accurate ranging remains possible even in places with a lot of reflectors, such as indoor environments.
[Figure 2]
Where to put all this bandwidth?
So we need a lot of bandwidth. Unfortunately, everybody wants it: in wireless communication systems, more bandwidth means faster downloads. However, if everybody transmitted on the same frequencies, all the signals would interfere and nobody would be able to receive anything meaningful. Because of this, use of the frequency spectrum is highly regulated.
So how is it possible that UWB gets 500 MHz of precious bandwidth while most other systems have to make do with a lot less? Well, UWB systems are only allowed to transmit at very low power (the power spectral density must be below -41.3 dBm/MHz). This very strict power constraint means that a single pulse cannot reach far: at the receiver, the pulse will likely be below the noise level. To solve this, the transmitter sends a train of pulses (typically 128 or 1024) to represent a single bit of information. At the receiver, the received pulses are accumulated, and with enough pulses the power of the 'accumulated pulse' rises above the noise level and reception is possible. Hooray!
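A toy illustration of why accumulation works (the pulse shape, amplitude, and noise level below are made up for the demonstration): the signal amplitude grows linearly with the number of accumulated pulses while the noise standard deviation grows only with its square root, so the amplitude SNR improves by a factor of √N, about 30 dB for 1024 pulses.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pulses = 1024                           # pulses per bit, as in the text
pulse = np.zeros(64)
pulse[32] = 0.1                           # one weak pulse, well below the noise floor
noisy = pulse + rng.normal(0.0, 1.0, size=(n_pulses, 64))  # fresh noise per repetition
accumulated = noisy.sum(axis=0)

print(pulse[32] * n_pulses)               # signal amplitude grows by n_pulses -> 102.4
print(np.sqrt(n_pulses))                  # noise std grows only by sqrt(n_pulses) -> 32
print(accumulated[32])                    # the pulse bin now stands out from the noise
```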
The IEEE 802.15.4 standard for Low-Rate Wireless Personal Area Networks defines a number of UWB channels at least 500 MHz wide. Depending on the country you're in, some of these channels are allowed and others are not. In general, the lower-band channels (1 to 4) can be used in most countries under some limitations on update rate (using mitigation techniques). Channel 5 is accepted in most parts of the world without any limitations, with the notable exception of Japan. Purely from physics, the lower the channel's centre frequency, the better the range.
A note on the received signal strength (RSS)
There is another way to measure the distance between two points using radio waves, and that is the received signal strength (RSS): the further apart the two points are, the smaller the received signal strength will be. Hence, from this RSS value, we should be able to derive the distance. Unfortunately, it's not that simple. In general, the received signal strength is a combination of the power of all the reflections, not only of the desired line-of-sight path. Because of this, it becomes very hard to relate the RSS value to the true distance. The figure below shows just how bad it is.
In this figure, the RSS value of a Bluetooth signal is measured at various distances. At every distance, the error bars show how the RSS value varies. Clearly, the variation in the RSS value is very large, which makes RSS unsuitable for accurate ranging or positioning.
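For reference, the naive RSS-to-distance conversion typically inverts the log-distance path-loss model; the reference power and path-loss exponent below are illustrative placeholders, not values from the figure:

```python
def distance_from_rss(rss_dbm, rss_at_1m_dbm=-45.0, path_loss_exponent=2.0):
    """Invert the log-distance path-loss model:
    RSS(d) = RSS(1 m) - 10 * n * log10(d / 1 m)."""
    return 10.0 ** ((rss_at_1m_dbm - rss_dbm) / (10.0 * path_loss_exponent))

print(distance_from_rss(-65.0))  # nominally 10 m, but real readings scatter widely
```

In practice the exponent itself varies with the environment, which is exactly the variance the figure shows.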

How to upsample audio with digital interpolation

I want to take an array with N audio data points and upsample it so that there are L·N points. I understand an accurate way to do this is to pad L-1 zero points between each original point and then to low-pass the signal. According to this 4-minute video https://www.youtube.com/watch?v=sJslC6TuCoc I should low-pass at a frequency of π/L and then apply a gain of L to the result to properly upsample my signal. I am having trouble with this low-passing step, and my resulting audio signal is not audible at all. Can anyone help me here? Is this "low pass" really more like a band-reject filter or something?
My low pass algorithm is noted here (biquad transfer function with coefficients marked under "LPF"): http://music.columbia.edu/pipermail/music-dsp/1998-October/054185.html
You can interpolate all the added points using a high-quality interpolation algorithm, such as a polyphase windowed-sinc FIR filter.
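As a concrete sketch of the zero-stuff-then-lowpass recipe from the question (the filter length is an arbitrary choice here; scipy's resample_poly performs the polyphase windowed-sinc version in a single call):

```python
import numpy as np
from scipy import signal

def upsample(x, L):
    # 1) Insert L-1 zeros between samples; this creates L-1 spectral images.
    y = np.zeros(len(x) * L)
    y[::L] = x
    # 2) Low-pass at pi/L (1/L of the new Nyquist) to remove the images,
    #    with a gain of L to restore the original amplitude.
    h = signal.firwin(numtaps=8 * L + 1, cutoff=1.0 / L)
    return np.convolve(y, L * h, mode="same")

# Polyphase windowed-sinc equivalent in one call:
# signal.resample_poly(x, up=L, down=1)
```

Note that it really is a plain low-pass, not a band-reject; if the output is inaudible, check that the gain of L was applied and that the cutoff is expressed relative to the new (higher) sample rate.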

How can I select an optimal window for Short Time Fourier Transform?

I want to select an optimal window for the STFT for different audio signals. For a signal with frequency content from 10 Hz to 300 Hz, what would be the appropriate window size? Similarly, for a signal with frequency content from 2000 Hz to 20000 Hz, what would be the optimal window size?
I know that a window size of 10 ms gives a frequency resolution of about 100 Hz. But if the frequency content of the signal lies between 100 Hz and 20000 Hz, is 10 ms an appropriate window size, or should we go for some other window size because of the 20000 Hz content in the signal?
I know the classic "uncertainty principle" of the Fourier transform: you can have high resolution in time or high resolution in frequency, but not both at the same time. The window length lets you trade off between the two.
Windowed analysis is designed for quasi-stationary signals: signals that change over time but can be considered stable over some short period.
One example of a quasi-stationary signal is speech. Its frequency components change over time as the position of the tongue and mouth changes, but over a short period of approximately 0.01 s they can be considered stable, because the tongue does not move that fast. The 0.01 s figure is set by our biology: we simply can't move the tongue faster than that.
Another example is music. When you pluck a string, it produces a more or less stable sound for some short period of time, usually about 0.05 seconds. Within this period you can consider the sound stable.
There are other kinds of signals too; for example, a signal might sit at 10 GHz and be quasi-stationary over 1 ms.
Windowed analysis lets you capture both the stationary properties of a signal and its change over time. Here it does not matter what sample rate the signal has, what frequency resolution you need, or whether the main harmonics are near 100 Hz or near 3000 Hz. What matters is over what period of time the signal is stationary and over what period it can be considered as changing.
So for speech a 25 ms window is good simply because speech is quasi-stationary over that range. For music you usually take longer windows, because our fingers move more slowly than our mouths. You need to study your signal to decide on the optimal window length, or you need to provide more information about it.
You need to specify your "optimality" criteria.
For a desired frequency resolution df, you need a length or window size of roughly Fs/df (or from a fraction of that to twice that length or more, depending on S/N and window). However, the length also needs to be similar to or shorter than the length of time during which your signal is stationary within your desired frequency resolution bounds. This may not be possible or known, thus requiring you to specify which criterion (df vs. dt) is more important for your desired "optimality".
If multiple window lengths meet your criteria, then the shortest length that is a multiple of very small primes is likely to be the most computationally efficient for the following FFTs within an STFT computational sequence.
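A sketch of how these two criteria translate into code, assuming a 44.1 kHz signal and a desired ~100 Hz resolution (both numbers are placeholders):

```python
import numpy as np
from scipy import signal

fs = 44100                       # assumed sample rate, Hz
df = 100.0                       # desired frequency resolution, Hz
nperseg = int(round(fs / df))    # ~441 samples for ~100 Hz bins
nperseg = 512                    # round up to 2**9: small primes -> fast FFTs

t = np.arange(fs) / fs                       # 1 s test tone at 1 kHz
x = np.sin(2 * np.pi * 1000.0 * t)
f, tt, Zxx = signal.stft(x, fs=fs, window="hann",
                         nperseg=nperseg, noverlap=nperseg // 2)
print(f[1] - f[0])               # actual bin spacing: fs/nperseg ~ 86 Hz
```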
Based on the sampling theorem, the sampling frequency needs to be larger than twice the highest frequency in the signal. And from the DFT (discrete Fourier transform) we also know that the frequency resolution is the inverse of the entire signal duration, and the entire frequency span is the inverse of the time resolution. Note that frequency is simply the inverse of period, thus the relationships go inversely with each other.
Frequency resolution = 1 / (overall time duration)
Frequency span = 1 / (time resolution)
Having said that, to process a 20 kHz audio signal we need to sample at 40 kHz. And if we want to bring the frequency resolution down to, say, 10 Hz, we will need to capture a duration of at least 0.1 s, which is 1/(10 Hz).
This is the reason audio files are commonly said to be 44k: the human hearing range is limited to 20 kHz, and to add some margin we use a 44.1 kHz sampling frequency instead of 40 kHz.
I think the uncertainty principle comes down to the fact that the more localized a signal is in one domain, the more it spreads out in the other. For example, an impulse in the time domain has a spectrum that stretches from negative to positive infinity, i.e., across the entire spectrum. And vice versa: a single-frequency signal in the spectrum stretches from negative to positive infinity in the time domain. This is simply because we would have to observe forever in order to know whether a signal is a pure sinusoid or not.
But for the DFT, we can always get the frequency span we need by sampling at twice the highest frequency of the signal, and the resolution we want by sampling for a long enough duration. So it is not as uncertain as the uncertainty principle might suggest, as long as we know how many samples to take, and how fast and how long to take them.
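The two relations above are easy to verify numerically, using the same numbers as the worked example (40 kHz sampling, 0.1 s of data):

```python
import numpy as np

fs = 40_000                      # twice the 20 kHz audio bandwidth
duration = 0.1                   # 0.1 s of data
n = int(fs * duration)           # 4000 samples
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
print(freqs[1] - freqs[0])       # resolution = 1/duration = 10 Hz
print(freqs[-1])                 # span up to fs/2 = 20 kHz
```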

Analyzing Pulse Train

I am trying to post-process pulse-train data. It is a 0 to 5 V square wave where the frequency of the pulses corresponds to a physical measurement. During measurement I may see anywhere from 100 pulses/second to 10,000 pulses/second. The duty cycle changes.
I wrote a pulse counter function to analyze the pulse data in the time domain, but the result was very noisy. I suspect that an FFT may be appropriate, though I have never really done anything like this before.
Has anybody done anything similar? What would be the broad methodology for analyzing the pulse train in the frequency domain? Would it be best to take an FFT at specific time intervals (for instance, every second's worth of data)?
An FFT might be useful if your pulse train were stationary over some interval (the length of an FFT) and embedded in noise. Otherwise, why not just use the reciprocal of the time between rising edges?
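A minimal sketch of the edge-timing approach (the threshold and sample rate are assumptions, and the test signal is synthetic):

```python
import numpy as np

def pulse_frequency(x, fs, threshold=2.5):
    """Estimate pulse rate from the timing of rising edges.
    x: sampled 0-5 V pulse train; fs: sample rate in Hz."""
    high = x > threshold
    rising = np.flatnonzero(~high[:-1] & high[1:]) + 1   # indices of low->high crossings
    if len(rising) < 2:
        return None
    periods = np.diff(rising) / fs          # seconds between consecutive edges
    return 1.0 / periods.mean()             # mean rate; use 1/periods for instantaneous rates

# e.g. a 1 kHz, 30% duty-cycle train sampled at 1 MHz:
fs = 1_000_000
t = np.arange(int(0.05 * fs)) / fs
x = 5.0 * ((t * 1000.0) % 1.0 < 0.3)
print(pulse_frequency(x, fs))               # ~1000.0
```

This is robust to duty-cycle changes, since only the rising edges are used; averaging the periods over a chosen interval trades responsiveness for noise rejection.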

Why in FFT output of 1Hz sine wave, does 1Hz magnitude behave like a sine wave?

I have been developing a small piece of software in .NET that takes a signal from a sensor in real time and shows the FFT of that signal, also in real time.
I have used the alglib library for the FFT function. My purpose is to observe the intensity of a particular frequency over time.
To check the software, I fed it a sine wave with a frequency of 1 Hz. The following image shows a screenshot from the software. The upper graph shows the frequency spectrum, with the peak at 1 Hz. However, when this peak is observed over time, as shown in the lower graph, the intensity behaves like a sine wave.
My sampling frequency is 30 kHz. What I do not understand is how I am getting this sine signal, and why the magnitude of the frequency behaves like this.
This is an example of the effects of windowing. It derives from the fact that the FFT is not a precise operation except when dealing with perfectly periodic signals. When you window your signal, you turn it into a smaller chunk that may not repeat perfectly. The FFT algorithm calculates the spectrum of this chunk of audio repeated infinitely. Since that is not a perfect sine wave, you don't get an exact value for the result. Furthermore, if your window length doesn't line up perfectly with a multiple of your signal's period, the window will phase-shift with respect to your signal, capturing a slightly different chunk of the signal each time, and the FFT calculating the spectrum of a different infinitely repeated signal. If you think about it, this phase difference will naturally be periodic as well, as the window catches up with the next period of your signal.
However, this would only explain smaller variations in the intensity. Assuming you used correct labels on the axes of the bottom graph (something you should double-check), something else is wrong. Your window might be too small (although I expect not, because then you would see more spectral leakage). Another possibility is that you might be plotting the real part of the FFT, not the magnitude. As the phase changes, the real and imaginary parts will vary, but you'd expect the magnitude to stay roughly the same.
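You can reproduce the symptom numerically: slide a window over a 1 Hz tone and track the 1 Hz bin. The real part oscillates with the window's phase offset while the magnitude stays constant (the 30 kHz rate matches the question; the window length and hop size are assumptions):

```python
import numpy as np

fs = 30_000                       # sample rate from the question, Hz
f0 = 1.0                          # 1 Hz input tone
n = 4 * fs                        # 4 s of signal
x = np.sin(2 * np.pi * f0 * np.arange(n) / fs)

win = 2 * fs                      # 2 s window -> 0.5 Hz bins, so bin 2 sits at 1 Hz
for start in range(0, n - win, fs // 4):      # slide the window in 0.25 s hops
    bin_1hz = np.fft.rfft(x[start:start + win])[2]
    # The real part oscillates as the window's phase offset advances;
    # the magnitude stays (nearly) constant at win/2.
    print(f"real: {bin_1hz.real:10.1f}   |X|: {abs(bin_1hz):10.1f}")
```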
