Advantages of OS-CFAR over CA-CFAR?

In RADAR signal processing we use the Constant False Alarm Rate (CFAR) algorithm to distinguish targets from the background noise.
The most widespread algorithms are
Cell Averaging CFAR (CA-CFAR)
Ordered Statistics CFAR (OS-CFAR)
What are the advantages of OS-CFAR over CA-CFAR?

[ADVANTAGE OF OS-CFAR OVER CA-CFAR]
Better at detecting targets in a multi-target environment: a strong target falling in the training cells inflates the cell average and can mask a weaker neighbour, whereas an order statistic is barely affected by a few large training cells.
Gives better scanning results (targets and their positions).
[DISADVANTAGE OF OS-CFAR OVER CA-CFAR]
OS-CFAR has to sort the training cells, so its processing time is longer than CA-CFAR's.
You can see these papers for more information about the comparison between OS-CFAR and CA-CFAR:
https://ieeexplore.ieee.org/document/9163697
https://www.sciencedirect.com/science/article/pii/S187705091830560X
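To make the multi-target point concrete, here is a small NumPy sketch of both thresholds on a toy range profile (my own illustration, not taken from the linked papers; the window sizes, order index k, and scaling factors are arbitrary choices, in practice set from the desired false-alarm rate):

```python
import numpy as np

def cfar_thresholds(x, n_train=16, n_guard=2, k=12, alpha_ca=3.0, alpha_os=3.0):
    """Compute CA-CFAR and OS-CFAR detection thresholds per cell.

    x        : 1-D array of power samples (the range profile)
    n_train  : training cells on EACH side of the cell under test
    n_guard  : guard cells on each side
    k        : order-statistic index for OS-CFAR (k-th smallest, 1-based)
    alpha_*  : threshold scaling factors
    """
    n = len(x)
    thr_ca = np.full(n, np.inf)
    thr_os = np.full(n, np.inf)
    for i in range(n_train + n_guard, n - n_train - n_guard):
        left = x[i - n_guard - n_train : i - n_guard]
        right = x[i + n_guard + 1 : i + n_guard + 1 + n_train]
        train = np.concatenate([left, right])
        thr_ca[i] = alpha_ca * train.mean()           # CA: average of training cells
        thr_os[i] = alpha_os * np.sort(train)[k - 1]  # OS: k-th order statistic
    return thr_ca, thr_os

# Two nearby targets on a flat unit-power noise floor: the strong one leaks
# into the CA training average and masks its weaker neighbour, while the
# k-th order statistic is barely affected.
x = np.ones(200)
x[100] = 200.0   # strong target
x[105] = 10.0    # weaker target 5 cells away
```

Running `cfar_thresholds(x)` on this profile shows the weaker target above the OS-CFAR threshold but below the inflated CA-CFAR one.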

Related

How to normalize amplitude differences within a 433 MHz signal burst in GNU Radio Companion?

I'm learning SDR by trying to decode various 433 MHz keyfob signals.
My initial flow for capture and pre-processing looks like this:
What I get on the sink is:
I guess the bits are visible fine and I could proceed to decode them somehow. But I am worried about the big difference in amplitude: the very beginning of the burst has a much higher amplitude than the rest of the packet. This picture is very consistent (I was not able to get bursts with more balanced amplitudes). If this were a music recording I would look for a compression method, but I don't know what the equivalent is in the SDR world.
I'm not sure whether this will be a problem when I try to quadrature demod, binary slice and/or clock recover.
Is this a known problem, and what is the approach to eliminate it within GNU Radio Companion?
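The usual tool for this is automatic gain control; GNU Radio Companion ships AGC blocks (e.g. the AGC2 block) that you can drop in before the demod chain. A minimal Python sketch of the underlying idea, with made-up `rate` and `reference` values:

```python
import numpy as np

def agc(samples, rate=1e-3, reference=1.0):
    """One-pole AGC: track the output magnitude and nudge the gain so it
    approaches `reference`. `rate` trades settling speed against ripple."""
    gain = 1.0
    out = np.empty(len(samples))
    for i, s in enumerate(samples):
        out[i] = s * gain
        # raise the gain if the output is too quiet, lower it if too loud
        gain += rate * (reference - abs(out[i]))
    return out

# A burst whose first half is 3x louder than the rest, like the keyfob capture
burst = np.concatenate([np.full(5000, 3.0), np.full(5000, 1.0)])
leveled = agc(burst)
```

After a short settling transient at each level change, both halves come out near the reference amplitude, which is exactly what you want before slicing.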

Python: time stretch wave files - comparison between three methods

I'm doing some data augmentation on a speech dataset, and I want to stretch/squeeze each audio file in the time domain.
I found the following three ways to do that, but I'm not sure which is the best or most optimized way:
dimension = int(len(signal) * speed)
res = librosa.effects.time_stretch(signal, speed)
res = cv2.resize(signal, (1, dimension)).squeeze()
res = skimage.transform.resize(signal, (dimension, 1)).squeeze()
However, I found that librosa.effects.time_stretch adds unwanted echo (or something like that) to the signal.
So, my question is: What are the main differences between these three ways? And is there any better way to do that?
librosa.effects.time_stretch(signal, speed) (docs)
In essence, this approach transforms the signal using stft (short time Fourier transform), stretches it using a phase vocoder and uses the inverse stft to reconstruct the time domain signal. Typically, when doing it this way, one introduces a little bit of "phasiness", i.e. a metallic clang, because the phase cannot be reconstructed 100%. That's probably what you've identified as "echo."
Note that while this approach effectively stretches audio in the time domain (i.e., the input is in the time domain as well as the output), the work is actually being done in the frequency domain.
cv2.resize(signal, (1, dimension)).squeeze() (docs)
All this approach does is interpolate the given signal using bilinear interpolation. This is suitable for images, but strikes me as unsuitable for audio signals. Have you listened to the result? Does it sound at all like the original signal, only faster/slower? I would assume not only the tempo changes, but also the frequencies and perhaps other effects.
skimage.transform.resize(signal, (dimension, 1)).squeeze() (docs)
Again, this is meant for images, not sound. In addition to the interpolation (spline interpolation of order 1 by default), this function also performs anti-aliasing for images. Note that this has nothing to do with avoiding audio aliasing effects (Nyquist/aliasing), so you should probably turn it off by passing anti_aliasing=False. Again, I would assume that the results may not be exactly what you want (changed frequencies, other artifacts).
What to do?
IMO, you have several options.
If what you feed into your ML algorithms ends up being something like a Mel spectrogram, you could simply treat it as an image and stretch it using the skimage or opencv approach. Frequency ranges would be preserved. I have successfully used this kind of approach in this music tempo estimation paper.
Use a better time_stretch library, e.g. rubberband. librosa is great, but its current time scale modification (TSM) algorithm is not state of the art. For a review of TSM algorithms, see for example this article.
Ignore the fact that the frequencies change and simply insert zero samples into the signal at regular intervals, or drop samples from it at regular intervals (much like your image interpolation does). If you don't stretch too far, it may still work for data augmentation purposes. After all, the word content is unchanged even if the audio ends up with slightly higher or lower frequencies.
Resample the signal to another sampling frequency, e.g. 44100 Hz -> 43000 Hz or 44100 Hz -> 46000 Hz, using a library like resampy, and then pretend that it's still 44100 Hz. This still changes the frequencies, but at least you get the benefit that resampy properly filters the result, so you avoid the aforementioned aliasing that otherwise occurs.
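The resample-and-relabel option can be sketched as follows; this uses scipy's FFT-based resampler rather than resampy (my substitution, the idea is the same):

```python
import numpy as np
from scipy.signal import resample

def stretch_by_resampling(signal, speed):
    """Change playback speed by resampling and then *pretending* the sample
    rate is unchanged: speed=2.0 halves the duration and doubles every
    frequency. Fine for mild augmentation, as the last option describes."""
    new_len = int(round(len(signal) / speed))
    return resample(signal, new_len)

# 440 Hz tone at 8 kHz; sped up 2x it plays back as 880 Hz
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
fast = stretch_by_resampling(tone, 2.0)
```

The dominant frequency of `fast`, interpreted at the original 8 kHz rate, lands at 880 Hz, illustrating the pitch shift that comes with this shortcut.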

8dpsk. How to "improve" constellation diagram?

I am trying to demodulate 8DPSK in software. Carrier frequency = 1800 Hz, modulation rate = 1600 baud, i.e. ITU-T V.27. The demodulator has the following properties:
Input passband hardware filter 50...3600 Hz;
sampling frequency - 9600 Hz;
Matched filter - RRC, Beta=1;
Timing recovery - simple Gardner, first-order loop, 2 points per symbol.
Also, the demodulator has an interpolator to interpolate between matched filter outputs.
The physical line is short and I believe the amount of AWGN must be relatively small. The demodulator works without errors, but the constellation diagram looks ugly (see picture). Can anyone tell me how to "improve" the constellation diagram?
I have placed the interpolator before the matched filter to minimize ISI and applied an equalizer. Now it looks much better.
Your question has no general answer, but I want to suggest some solutions.
First, it matters which filter you use in your Gardner algorithm to calculate the timing error and mu, and how you set the loop filter coefficients. Tuning these should improve your results.
Second, the RRC filter roll-off factor is very important for getting better results.
Third, if you remove the frequency offset before timing recovery, you get better results in both time-to-lock and constellation density. To do this, feed back the estimated phase from the PLL to the timing recovery input and compensate the phase offset for symbol synchronization.
Fourth, the interpolation you use to extract symbols from the samples is important.
I list these suggestions in the order that seems most useful for your constellation.
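To make the first suggestion concrete, here is a minimal sketch of the Gardner timing error detector itself (my own illustration; the loop filter and mu update around it are exactly the parts the answer says need tuning):

```python
import numpy as np

def gardner_error(prev_sym, mid, curr_sym):
    """Gardner timing error from three samples spanning one symbol period
    (2 samples/symbol): the two symbol strobes and the midpoint between
    them. The error is zero when the midpoint sits on the transition, and
    its sign tells the loop which way to move the sampling instant. The
    complex form is insensitive to a static carrier phase, but a frequency
    offset rotates the samples within a symbol and degrades it -- which is
    why removing the frequency offset before timing recovery helps."""
    return float(np.real(np.conj(mid) * (curr_sym - prev_sym)))
```

For a -1 to +1 transition, a midpoint sampled exactly on the zero crossing gives zero error; a midpoint sampled after the crossing gives a positive error, before it a negative one, and with no transition there is no timing information at all.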

Frequency analysis of very short signal in GNU Octave

I have some very short signals from an oscilloscope (50k-200k samples) recorded over about 2 ms. These are acoustic signals of a spark from an ESD (electrostatic discharge).
I'd like to get some frequency data of that signal, in near-acoustic frequency range (up to about 30kHz) with as high time resolution as possible.
I have tried plotting a spectrogram (specgram in Octave) to view the signal, but the output is not really useful. Using specgram( x, N, fs );, where x is my signal at sampling rate fs, I get a plot starting at very high frequencies of about 500 MHz for low values of N. I get better frequency resolution for large N values (like 2^12-2^13), but then the window is too wide and I get only 2 spectrum values over the whole signal length.
I understand that it may be the limitation of Fourier transform which is probably used by the specgram function (actually, I don't know much about signal analysis).
Is there any other way to get some frequency (as a function of time) information from that kind of signal? I've read something about wavelets, but when I tried using the dwt function of the signal package, I received this error:
error: 'wfilters' undefined near line 51 column 14
error: called from
dwt at line 51 column 12
Even if this worked, I am not so sure I'd know how to actually use the output of those wavelet functions ...
To get audio-frequency information from such a high sample rate, you will need to obtain a sample vector long enough to contain at least a few whole cycles at audio frequencies, e.g. many tens of milliseconds of contiguous samples, which may or may not be more than your scope can gather. To process this amount of data reasonably, you might low-pass filter the sample data down to just the audio frequencies, and then resample it to a lower sample rate that is still above twice the filter's cut-off frequency. Then you will end up with a much shorter sample vector to feed an FFT for your audio spectrum analysis.
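The filter-then-resample recipe might look like this. It is a Python/scipy sketch rather than Octave (Octave's signal package has decimate and specgram playing the same roles), and the 1 MS/s scope rate and 10 kHz tone are assumptions for illustration only:

```python
import numpy as np
from scipy.signal import decimate, spectrogram

fs = 1_000_000                      # assumed scope sample rate: 1 MS/s
t = np.arange(int(0.05 * fs)) / fs  # 50 ms -- "many tens of milliseconds"
x = np.sin(2 * np.pi * 10_000 * t)  # stand-in for the ~10 kHz acoustic content

# decimate() low-pass filters *before* downsampling, so the audio band
# survives and everything above the new Nyquist (50 kHz here) is rejected
q = 10
x_lo = decimate(x, q, ftype='fir')  # 1 MS/s -> 100 kS/s, 50k -> 5k samples

f, tt, Sxx = spectrogram(x_lo, fs / q, nperseg=1024)
```

With the shorter, audio-rate vector, the spectrogram's frequency axis now spans 0-50 kHz instead of 0-500 MHz, and the 10 kHz content shows up where you expect it.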

Software Phase Locked Loop example code needed

Does anyone know of anywhere I can find actual code examples of Software Phase Locked Loops (SPLLs) ?
I need an SPLL that can track a PSK-modulated signal that is somewhere between 1.1 kHz and 1.3 kHz. A Google search brings up plenty of academic papers and patents but nothing usable, and even a trip to the university library, which has a shelf full of books on hardware PLLs, turned up only a single chapter on SPLLs in one book, and that was more theoretical than practical.
Thanks for your time.
Ian
I suppose this is probably too late to help you (what did you end up doing?) but it may help the next guy.
Here's a golfed example of a software phase-locked loop I just wrote in one line of C, which will sing along with you:
main(a,b){for(;;)a+=((b+=16+a/1024)&256?1:-1)*getchar()-a/512,putchar(b);}
I present this tiny golfed version first in order to convince you that software phase-locked loops are actually fairly simple, as software goes, although they can be tricky.
If you feed it 8-bit linear samples on stdin, it will produce 8-bit samples of a sawtooth wave attempting to track one octave higher on stdout. At 8000 samples per second, it tracks frequencies in the neighborhood of 250Hz, just above B below middle C. On Linux you can do this by typing arecord | ./pll | aplay. The low 9 bits of b are the oscillator (what might be a VCO in a hardware implementation), which generates a square wave (the 1 or -1) which gets multiplied by the input waveform (getchar()) to produce the output of the phase detector. That output is then low-pass filtered into a to produce the smoothed phase error signal which is used to adjust the oscillation frequency of b to push a toward 0. The natural frequency of the square wave, when a == 0, is for b to increment by 16 every sample, which increments it by 512 (a full cycle) every 32 samples. 32 samples at 8000 samples per second are 1/250 of a second, which is why the natural frequency is 250Hz.
Then putchar() takes the low 8 bits of b, which make up a sawtooth wave at 500Hz or so, and spews them out as the output audio stream.
There are several things missing from this simple example:
It has no good way to detect lock. If you have silence, noise, or a strong pure 250Hz input tone, a will be roughly zero and b will be oscillating at its default frequency. Depending on your application, you might want to know whether you've found a signal or not! Camenzind's suggestion in chapter 12 of Designing Analog Chips is to feed a second "phase detector" 90° out of phase from the real phase detector; its smoothed output gives you the amplitude of the signal you've theoretically locked onto.
The natural frequency of the oscillator is fixed and does not sweep. The capture range of a PLL, the interval of frequencies within which it will notice an oscillation if it's not currently locked onto one, is pretty narrow; its lock range, over which it will range in order to follow the signal once it's locked on, is much larger. Because of this, it's common to sweep the PLL's frequency over the whole range where you expect to find a signal until you get a lock, and then stop sweeping.
The golfed version above is reduced from a much more readable example of a software phase-locked loop in C that I wrote today, which does do lock detection but does not sweep. It needs about 100 CPU cycles per input sample per PLL on the Atom CPU in my netbook.
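Since the readable C version sits behind a link, here is my own un-golfed Python reading of the one-liner (a sketch of how I interpret it, not the author's actual code):

```python
def spll(samples):
    """Un-golfed reading of: a+=((b+=16+a/1024)&256?1:-1)*getchar()-a/512.
    `b` is a 9-bit phase accumulator (512 counts per cycle), bit 8 of `b`
    is the square-wave NCO output, and `a` is the low-pass-filtered phase
    error that pulls the NCO frequency. int() mimics C's truncating
    division for negative values of `a`."""
    a = b = 0
    out = []
    for s in samples:
        b += 16 + int(a / 1024)          # NCO step: 16/sample -> 250 Hz at 8 kHz
        square = 1 if b & 256 else -1    # square-wave "VCO" output
        a += square * s - int(a / 512)   # phase detector + one-pole low-pass
        out.append(b & 255)              # low 8 bits: ~500 Hz sawtooth out
    return out

free_run = spll([0] * 64)  # silence in: the NCO free-runs at its natural rate
```

With zero input the error `a` stays at zero, so `b` advances exactly 16 counts per sample and the 8-bit sawtooth wraps every 16 samples, i.e. 500 Hz at 8000 samples per second, matching the description above.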
I think that if I were in your situation, I would do the following (aside from obvious things like looking for someone who knows more about signal processing than I do, and generating test data). I probably wouldn't filter and downconvert the signal in a front end, since it's at such a low frequency already. Downconverting to a 200Hz-400Hz band hardly seems necessary. I suspect that PSK will bring up some new problems, since if the signal suddenly shifts phase by 90° or more, you lose the phase lock; but I suspect those problems will be easy to resolve, and it's hardly untrodden territory.
This is an interactive design package for designing digital (i.e. software) phase locked loops (PLLs). Fill in the form and press the ``Submit'' button, and a PLL will be designed for you.
Interactive Digital Phase Locked Loop Design
This will get you started, but you really need to understand the fundamentals of PLL design well enough to build it yourself in order to troubleshoot it later - This is the realm of digital signal processing, and while not black magic it will certainly give you a run for your money during debugging.
-Adam
Do you have Matlab with Simulink? There are PLL demo files available at Matlab Central here. Matlab's code generation capabilities might get you from there to a PLL written in C.
