Implementing a phase demodulator in software

I'm attempting to send and receive BPSK-modulated data through sound. Currently I'm using the Goertzel algorithm as a bandpass filter for demodulation. I have no formal training in signal processing.
Given a sample rate of 44100 Hz and a bucket (block) size of 100 samples, my intuition says that generating a wave at a frequency that is a multiple of 441 Hz should result in a relatively constant detected phase. At other frequencies, the phase I detect should drift.
However, my current implementation shows a phase drift of around 90 degrees over the course of a second when detecting a generated sound wave. Is this to be expected, or is it a sign of a flaw in my implementation of Goertzel?
Furthermore, is there a better, perhaps obvious, way to detect the phase of a wave at a specific frequency than using Goertzel's algorithm?
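For concreteness, a minimal Goertzel phase detector along these lines might look like this (a NumPy sketch following the classic embedded.com formulation; the bin-centered 441 Hz test mirrors the setup above):

    import numpy as np

    def goertzel_phase(samples, sample_rate, freq):
        # One Goertzel block: returns (magnitude, phase) of the DFT bin
        # nearest `freq`.
        n = len(samples)
        k = round(freq * n / sample_rate)   # nearest bin
        w = 2.0 * np.pi * k / n
        coeff = 2.0 * np.cos(w)
        q1 = q2 = 0.0
        for x in samples:
            q0 = coeff * q1 - q2 + x
            q2, q1 = q1, q0
        real = q1 - q2 * np.cos(w)
        imag = q2 * np.sin(w)
        return np.hypot(real, imag), np.arctan2(imag, real)

    # 441 Hz is exactly one cycle per 100-sample block at 44100 Hz, so the
    # detected phase should be identical from block to block; frequencies
    # that are not multiples of 441 Hz will show the drift described above.
    fs, n = 44100, 100
    t = np.arange(4 * n) / fs
    tone = np.sin(2 * np.pi * 441 * t + 0.3)
    for i in range(4):
        print(goertzel_phase(tone[i * n:(i + 1) * n], fs, 441)[1])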

A slow phase drift can be the result of a small difference in the clock frequencies of the transmitter and receiver. This is to be expected.
Usually BPSK data is differentially encoded so you only need to detect the moments when the phase shifts by 180 degrees, and any slow phase drift or offset can be easily ignored.
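A sketch of that idea (illustrative names; `phases` would hold one phase estimate per symbol, e.g. one Goertzel result per symbol):

    import numpy as np

    def diff_decode(phases):
        # Phase change between consecutive symbols, wrapped to (-pi, pi].
        dphi = np.angle(np.exp(1j * np.diff(phases)))
        # A jump near 180 degrees decodes as 1, near 0 degrees as 0;
        # slow drift and constant offsets cancel in the difference.
        return (np.abs(dphi) > np.pi / 2).astype(int)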

You will need to perform some form of carrier recovery and symbol-timing recovery to track and correct offsets between the transmitter and receiver clocks.
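One common structure for the carrier-recovery part is a Costas loop. A hedged sketch (gains are illustrative and untuned, the arm filters are simple one-pole smoothers, and symbol-timing recovery is a separate step not shown):

    import numpy as np

    def costas_loop(x, fs, f0, kp=0.01, kf=1e-4, alpha=0.05):
        # BPSK Costas loop: tracks phase and small frequency offsets.
        phase, fi, fq = 0.0, 0.0, 0.0
        step = 2.0 * np.pi * f0 / fs           # nominal NCO increment
        baseband = np.empty(len(x))
        for n, s in enumerate(x):
            i = s * np.cos(phase)              # in-phase mix
            q = -s * np.sin(phase)             # quadrature mix
            fi += alpha * (i - fi)             # one-pole arm low-pass
            fq += alpha * (q - fq)
            err = fi * fq                      # classic BPSK error term
            step += kf * err                   # tracks clock-rate offset
            phase += step + kp * err           # tracks residual phase
            baseband[n] = fi                   # demodulated (+/-) data arm
        return baseband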


How to generate a waveform table for quicker realtime audio synthesis

I developed an app a few months back for iOS devices that generates real-time, harmonically rich drones. It works fine on newer devices, but it's running into buffer underruns on slower devices. I need to optimize this thing and need some mental help. Here's a super basic overview of what I'm currently doing:
Create an "Oscillator Bank" that consists of X number of harmonics (simply calculated from a given fundamental frequency. Nothing fancy here.)
Inside my DAC function that spits out samples to an iOS audio buffer, I call a "GetNextSample()" function that goes through the bank of sine oscillators, calculates the sample for each one and adds them up. Some simple additive synthesis.
Enjoy the beauty of the drone.
Again, it works great, until it doesn't. I'd like to optimize this thing so I'm not using brute additive synthesis of real-time calculated sine waves. If I limit the number of harmonics ("banks") to 2, it'll work on the older devices. Not cool. On the newer devices, it underruns around 50 harmonics. Not too bad. But if I want to play multiple drones at once to create some chords, that's too much processing power.... so...
Should I generate waveform tables to just loop through instead of constant calculation? (I assume yes...)
Should I convert my usage of double-precision floating point to integer based calculations? (I assume yes...)
And my big algorithmic question (being pretty non-mathematical):
If I use a waveform table, how do I accurately determine how long the wave/table should be? In my experience developing this app, if I just go to the end of a period (2*PI) and start over again, resetting the phase back to 0, I get a sound artifact, since I'm forcibly offsetting the phase. In other words, I can't guarantee that one period will give me the right results...
Maybe I'm overcomplicating things... What's the standard way of doing quick, processor-friendly real-time synthesis of multiple added sines?
I'll keep poking around in the meantime.
Thanks!
Have you tried (or can you; I'm not an iOS person) increasing the buffer size? That might give you enough headroom that you don't need this. Otherwise, yes, wavetable synthesis is a viable approach: you could recalculate the wavetable from the sum of all the harmonics only when a parameter changes.
I have written such a beast in Go on the server side... For starters, yes, use single-precision floating point.
To address table population, I would make sure your implementation is solid by having it synthesize a square wave. Visualize the output on each run as you give it each additional frequency (with its corresponding parameters of amplitude and phase shift)... By definition a single cycle is enough, as long as you correctly use enough cycles to cover the time period of a sample.
It's important to leverage the fact that generating an output curve from an input set of sine waves (each with frequency, amplitude, and phase shift) lends itself to doing the reverse: perform an FFT on that output curve to have the API give you its version of the underlying sine waves (again, each with a frequency, amplitude, and phase). This will confirm your system is accurate.
The name of the process you are implementing is the inverse Fourier transform, and there are libraries for this, though I too prefer rolling my own.
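For illustration, a minimal wavetable sketch (NumPy rather than the iOS code; the table size and harmonic amplitudes are arbitrary):

    import numpy as np

    TABLE_SIZE = 4096  # illustrative; a power of two eases wrapping

    def build_table(amplitudes):
        # Sum the harmonics once, only when a parameter changes.
        phase = 2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE
        table = np.zeros(TABLE_SIZE)
        for h, a in enumerate(amplitudes, start=1):
            table += a * np.sin(h * phase)
        return table

    def render(table, freq, sample_rate, num_samples):
        # Phase accumulator in table-index units, linear interpolation.
        # The index wraps modulo the table length rather than being
        # reset, so there is no forced phase offset at the loop point.
        idx = (freq * TABLE_SIZE / sample_rate) * np.arange(num_samples)
        idx %= TABLE_SIZE
        i0 = idx.astype(int)
        frac = idx - i0
        i1 = (i0 + 1) % TABLE_SIZE
        return (1 - frac) * table[i0] + frac * table[i1]

    drone = render(build_table([1.0, 0.5, 0.33, 0.25]), 110.0, 44100, 44100)

One cycle of the fundamental is enough for the table, because every harmonic completes a whole number of cycles within it; the artifact described in the question comes from resetting the phase to 0 instead of letting the read index wrap.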

Liftering cutoff

Is there a rule of thumb for deciding the cutoff value when performing low-time liftering in cepstral analysis? Or is it just trial-and-error?
I am trying to calculate the spectral envelope of the frequency response of data obtained from a vibration sensor. Sampling frequency is 5000 Hz.
I did a project on cepstral analysis and found that 15 or 20 works as the low-time liftering cutoff.
For my application, 15 seemed to be pretty good; both values are acceptable and work well.
But consider your own application.
I don't think there is any rule or formula to calculate it, because such values work well across many kinds of signals; whoever settled on them most likely did so by trial and error.
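For concreteness, a minimal low-time liftering sketch (NumPy; the cutoff of 15 matches the values above, and the tiny constant only guards the log):

    import numpy as np

    def spectral_envelope(x, cutoff=15):
        # Real cepstrum: inverse FFT of the log magnitude spectrum.
        log_mag = np.log(np.abs(np.fft.fft(x)) + 1e-12)
        cepstrum = np.fft.ifft(log_mag).real
        # Low-time lifter: keep quefrencies below `cutoff`
        # (symmetrically, since the real cepstrum is even).
        lifter = np.zeros_like(cepstrum)
        lifter[:cutoff] = 1.0
        lifter[-(cutoff - 1):] = 1.0
        # Back to the frequency domain: the smoothed spectral envelope.
        return np.exp(np.fft.fft(cepstrum * lifter).real)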

Objective-C normalise dB level from iPhone's built in mic

I am building a dB meter as part of an app I am creating. I have it receiving the peak and average power from the iPhone's mic (values ranging from -60 to 0.4), and now I need to figure out how to convert these power levels into dB levels like the ones in this chart: http://www.gcaudio.com/resources/howtos/loudness.html
Does anyone have any idea how I could do this? I can't figure out an algorithm for the life of me, and it is kind of crucial, as the whole point of the app is to do with real-world audio levels, if that makes sense.
Any help will be greatly appreciated.
Apple has done a pretty good job of making the frequency response of the microphone flat and consistent between devices, so the only thing you'll need to determine is a calibration point. For this you will require a reference audio source and a calibrated sound pressure level (SPL) meter.
It's worth noting that sound pressure measurements are often made against the A-weighting scale, which is frequency-weighted for the human aural system. To measure this, you will need to apply the relevant filter curve to results taken from the microphone.
Also be aware of the difference between peak and mean (in this case RMS) measurements.
As far as I can tell from looking at the documentation, the "power level" returned from an AVAudioRecorder (assuming that's what you're using – you didn't specify) is already in decibels. See here from the documentation for averagePowerForChannel:
Return Value: The current average power, in decibels, for the sound being recorded. A return value of 0 dB indicates full scale, or maximum power; a return value of -160 dB indicates minimum power (that is, near silence).
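In other words, the API already gives dBFS; converting to approximate dB SPL is just one measured offset. A sketch of the arithmetic (shown in Python for brevity; the offset value is purely illustrative and must be measured per device against a calibrated SPL meter and a steady reference tone):

    # Example value only, NOT a real calibration: measure the dBFS the
    # mic reports for a tone whose SPL you know, and store the difference.
    CAL_OFFSET_DB = 90.0

    def dbfs_to_spl(dbfs):
        # 0 dBFS (full scale) then reads as about CAL_OFFSET_DB dB SPL.
        return dbfs + CAL_OFFSET_DB

    print(dbfs_to_spl(-20.0))  # 70.0 dB SPL with the example offset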

Phase difference between two signals?

I'm working on an embedded project where I have to bring a transducer to resonance by calculating the phase difference between its voltage and current waveforms and driving that difference to zero by changing the drive frequency. The current I and voltage V are at the same frequency at any instant, but that frequency is not fixed (approximately 47 kHz to 52 kHz). All I have to do is calculate the phase difference between these two signals. Which method will be most effective?
FFT of the two signals and then the phase difference between the specific components?
Or cross-correlation of the two signals?
Or another, if any? Which method will give me the most accurate result, and with what resolution? Does the sampling rate affect the phase-difference resolution (the minimum phase difference that can be sensed)?
I'm new to digital signal processing; in case of any mistake, correct me.
ADDITIONAL DETAILS:
Noise in my system can be white/Gaussian noise (not significant) and harmonics of the fundamental (which might be the significant one in the resonance-mismatch case).
Yes, the 4046 can be a good alternative with switching regulators. I'm working with an NCO/DDS, where I can scale and reshape the sinusoid on an ongoing basis.
Implementing an analog filter would be very complex, as I would require a higher-order filter with a high roll-off rate for harmonic removal, so I'm choosing a DSP-based filter; it's easy to work with MATLAB and DSP processors.
What sampling rate would you suggest for a ~50 kHz (47 kHz - 52 kHz) system to achieve a phase resolution of preferably 0.1 degrees or less with an FFT or Goertzel? Frequency steps will vary from as small as ~1 - 2 Hz up to 50 - 200 Hz.
My frequency is variable (45 kHz - 55 kHz) but will be known to my system. Knowing the phase error for the last fed frequency is more desirable. After the FFT and digital filtering, an IFFT can be performed to obtain less noisy samples for further processing. So I guess the FFT does both tasks.
But I'm wondering about the phase-difference accuracy, because that's the crucial part.
The Goertzel algorithm (http://www.embedded.com/design/configurable-systems/4024443/The-Goertzel-Algorithm) is a fairly efficient tone-detection method that resolves the signal into real and imaginary components. I'll assume you can do the numerics to get the phase difference, or just the polarity, as you require.
Resolution versus time constant is a design tradeoff; this article highlights the issues: http://www.mstarlabs.com/dsp/goertzel/goertzel.html
Additional
"What accuracy can be obtained?"
It depends... upon what you are faced with (i.e., signal levels, external noise, etc.), what hardware you have (i.e., ADC, processor, etc.), and how you implement your solution (sample rate, numerical precision, etc.). Without the complete picture I'd only be guessing at what you could achieve, as the Goertzel approach is far from easy.
But I imagine that for a high-school project with good signal levels and low noise, the easier method of using the phase comparator of a 4046 PLL (comparator 2, as it locks at zero degrees; www.nxp.com/documents/data_sheet/HEF4046B.pdf) will likely get you down to a few degrees.
One other issue, if you have a high-Q transducer, is generating a high-resolution frequency. There is a method, but that's another avenue.
Yet more
"Harmonics of Fundamental (Which might be significant)"... hmm hence the digital filtering;
but if the sampling rate is too low then there might be a problem with aliasing. Also, mismatched anti-aliasing filters are likely to take your whole error budget. A rule of thumb of ten times sampling frequency seems a bit low, and it being higher it will make the filter design easier.
Spatial windowing addresses off-frequency issues along with higher roll-off and attenuation and is described in this article. Sliding Spectrum Analysis by Eric Jacobsen and Richard Lyons in Streamlining Digital Signal Processing http://www.amazon.com/Streamlining-Digital-Signal-Processing-Guidebook/dp/1118278380
In my previous project after detecting either carrier, I then was interested in the timing of the frequency changes in immense noise. With carrier phase generation inconstancies, the phase error was never quiescent to be quantified, so I can't guess better than you what you might get with your project conditions.
Not to detract from chip's answer (I upvoted it!) but some other options are:
Cross correlation. Off the top of my head, I am not sure what the performance difference between that and the Goertzel algorithm will be, but both should be doable on an embedded system.
Ad-hoc methods. For example, I would try something like this: bandpass the signals to eliminate noise, find the peaks and measure the time difference between the peaks. This will probably be more efficient, and, provided you do a reasonable job throwing out outliers and handling wrap-around, should be extremely robust. The bandpass filters will, themselves, alter the phase, so you'll have to make sure you apply exactly the same filter to both signals.
If the input signal-to-noise ratios are not too bad, a computationally efficient solution can be built based on zero-crossing detection. Also, have a look at http://www.metrology.pg.gda.pl/full/2005/M&MS_2005_427.pdf for a nice comparison of phase-difference detection algorithms, including zero-crossing ones.
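A minimal sketch of the zero-crossing idea (NumPy; assumes the signals are already band-filtered and the drive frequency is known):

    import numpy as np

    def phase_diff_zero_crossing(v, i, fs, freq):
        # Sub-sample timing of the first rising zero crossing of each
        # signal; a real implementation would average many crossings.
        def rising_crossing(x):
            idx = np.where((x[:-1] < 0) & (x[1:] >= 0))[0][0]
            frac = -x[idx] / (x[idx + 1] - x[idx])  # linear interpolation
            return (idx + frac) / fs
        dt = rising_crossing(v) - rising_crossing(i)
        deg = 360.0 * freq * dt
        return (deg + 180.0) % 360.0 - 180.0        # wrap to +/-180 degrees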
Computing one bin of a DFT (or using the similar complex Goertzel block filter) will work if the signal frequency is accurately known. (Set the DFT bin or the Goertzel to exactly that frequency.)
If the frequency isn't exactly known, you could try using an FFT with an FFTshift to interpolate the frequency magnitude peak, and then interpolate the phase at that frequency for each of the two signals. An FFT will also allow you to window the data, which may improve phase-estimation accuracy if the frequency isn't exactly bin-centered (or exactly the Goertzel filter frequency). Different windows may improve the phase-estimation accuracy for frequencies "between bins"; a Blackman-Nuttall window will be better than a rectangular window, but there may be better window choices.
The phase-measurement accuracy will depend on the S/N ratio, the length of time one samples the two (assumed stationary) signals, and possibly the window used.
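For the known-frequency case, a single-bin sketch (NumPy; the shared window cancels between channels, so it does not bias the difference):

    import numpy as np

    def phase_difference_deg(v, i, fs, freq):
        n = len(v)
        t = np.arange(n) / fs
        win = np.hanning(n)                    # helps off-bin frequencies
        ref = np.exp(-2j * np.pi * freq * t)   # single-bin correlation
        pv = np.angle(np.sum(v * win * ref))
        pi_ = np.angle(np.sum(i * win * ref))
        # Any window-induced phase bias is common to both channels and
        # cancels in the subtraction; wrap the result to +/-180 degrees.
        return np.degrees(np.angle(np.exp(1j * (pv - pi_))))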
If you have a Phase Locked Loop (PLL) that tracks each input, then you can subtract the phase coefficients (of the generator components) to determine offset between the phases. This would also be robust against noise.

Recover the original analog signal (time varying Voltage) from digitized version?

I have been looking into how to convert my digital data back into an analog signal.
I have a two-column ASCII data file (x: time, y: voltage amplitude) which I would like to convert into an analog signal (a voltage varying with time). There are digital-to-analog converters, but the good ones are quite expensive; there should be a simpler way to achieve this.
Ultimately, what I'd like to do is reconstruct the original time-varying voltage, which was sampled every nanosecond and recorded as an ASCII data file.
I thought I might feed the data into my laptop's sound card and regenerate the time-varying voltage, which I could then feed into the analyzer via the audio jack. Does this sound feasible?
I am not looking to recover just the "shape" but the signal (voltage) itself.
Puzzled on several accounts.
You want to convert into an analog signal (varying voltage with time). But what you already have, the discrete signal, is indeed a "varying voltage with time"; it's just that both the values (voltages) and the times are discrete. That's the way computers (digital equipment in general) work.
Only when the signal goes to some non-discrete medium (e.g. a classical audio cable and plug) do we have an analog signal. Indeed, the sound card of your computer is at its core a digital-to-analog converter.
So it appears you are not trying to do digital processing of your signal (interpolation, or whatever); you are not dealing with computer programming, but with a hardware issue: getting the signal onto a cable. If so, SO is not the proper place. You might try https://electronics.stackexchange.com/ ...
But, on another note, you say your data was "sampled every nano-second". That means one billion samples per second, or a sample frequency of 1 GHz. That's a ridiculously high frequency, at least in the audio world. You can't output that through a sound card, which is limited to the audio range (about 48 kHz = 48000 samples per second).
You just want to fit a curve to the data. Assuming the sampling rate is sufficient, a third-order polynomial would be plenty: at each point N, you fit a cubic polynomial to points N-1, N, N+1, and N+2, and then you have an analytic expression for the data values between those points. Shift over one point and repeat. You can average the values from multiple successive curves if you want.
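A minimal sketch of that sliding cubic fit (NumPy; assumes uniform 1 ns sampling as in the question):

    import numpy as np

    def cubic_segment(y, n, dt):
        # Exact cubic through samples n-1 .. n+2 (x in sample units);
        # valid for evaluating between samples n and n+1.
        xs = np.arange(n - 1, n + 3)
        coeffs = np.polyfit(xs, y[n - 1:n + 3], 3)
        return lambda t: np.polyval(coeffs, t / dt)   # t in seconds

    y = np.array([0.0, 0.8, 1.0, 0.7, 0.1, -0.5])
    seg = cubic_segment(y, 2, 1e-9)    # 1 ns sampling, as in the question
    print(seg(2.5e-9))                 # value midway between samples 2 and 3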
