FSK demodulation with GNU Radio - signal-processing

I'm trying to demodulate a signal using GNU Radio Companion. The signal is FSK (Frequency-shift keying), with mark and space frequencies at 1200 and 2200 Hz, respectively.
The data in the signal is text generated by a device called GeoStamp Audio. The device encodes GPS data fed into it into audio in real time, and it can also decode that audio. I have the decoded text version of the audio for reference.
I have set up a flow graph in GNU Radio (see below), and it runs without error, but with all the variations I've tried, I still can't get the data.
The output of the flow graph should be binary (1s and 0s) that I can later convert to normal text, right?
Is it correct to feed in a wav audio file the way I am?
How can I recover the data from the demodulated signal -- am I missing something in my flow graph?
This is an FFT plot of the wav audio file before demodulation:
This is the result of the scope sink after demodulation (maybe looks promising?):
UPDATE (August 2, 2016): I'm still working on this problem (occasionally), and unfortunately still cannot retrieve the data. The result is a promising-looking string of 1's and 0's, but nothing intelligible.
If anyone has suggestions for figuring out the settings on the Polyphase Clock Sync or Clock Recovery MM blocks, or the gain on the Quad Demod block, I would greatly appreciate it.
Here is one version of an updated flow graph based on Marcus's answer (also trying other versions with polyphase clock recovery):
However, I'm still unable to recover data that makes any sense. The result is a long string of 1's and 0's, but not the right ones. I've tried tweaking nearly all the settings in all the blocks. I thought maybe the clock recovery was off, but I've tried a wide range of values with no improvement.

So, at first sight, my approach here would look something like:
What happens here is that we take the input, shift it in the frequency domain so that mark and space are at ±500 Hz, and then use quadrature demod.
"Logically", we can then just make a "sign decision". I'll share the configuration of the Xlating FIR here:
Notice that the signal is first shifted so that the center frequency (the midpoint between 2200 and 1200 Hz) ends up at 0 Hz, and then filtered by a low pass (gain = 1.0, stopband starts at 1 kHz, passband ends at 1 kHz - 400 Hz = 600 Hz). At this point, the actual bandwidth that's still present in the signal is much lower than the sample rate, so you could also downsample without losses (set the decimation to something higher, e.g. 16), but for the sake of analysis, we won't do that.
The time sink should now show better values. Have a look at the edges; they are probably not extremely steep. For clock sync I'd hence recommend just trying the polyphase clock recovery instead of Mueller & Müller; choosing just about any "somewhat round" pulse shape could work.
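To make that concrete, here is a minimal Python sketch of such a chain (this is not the GRC file shown below; the sample rate, baud rate, filenames, and PFB taps are assumptions you would adapt to the actual recording):

```python
# Sketch only: a 1200/2200 Hz AFSK receive chain along the lines described above.
# Sample rate, baud rate, and filenames are assumptions.
import math
from gnuradio import gr, blocks, analog, digital, filter as gr_filter
from gnuradio.filter import firdes

samp_rate = 48000                  # assumed WAV sample rate -- use the file's real rate
baud      = 1200                   # assumed symbol rate
sps       = samp_rate / float(baud)
center    = (1200 + 2200) / 2.0    # 1700 Hz midpoint between mark and space

tb = gr.top_block()

src = blocks.wavfile_source("capture.wav", False)     # placeholder filename

# Shift the 1700 Hz midpoint down to 0 Hz and low-pass:
# passband edge 600 Hz, transition 400 Hz, so the stopband starts at 1 kHz.
lp_taps = firdes.low_pass(1.0, samp_rate, 600, 400)
xlate   = gr_filter.freq_xlating_fir_filter_fcf(1, lp_taps, center, samp_rate)

# Quadrature demod: output is proportional to instantaneous frequency,
# so mark (-500 Hz) and space (+500 Hz) come out with opposite signs.
quad = analog.quadrature_demod_cf(samp_rate / (2 * math.pi * 500.0))

# Polyphase clock recovery; an RRC prototype is one "somewhat round" choice.
# Loop bandwidth and tap length are just starting points.
nfilts   = 32
rrc_taps = firdes.root_raised_cosine(nfilts, nfilts * samp_rate, baud,
                                     0.35, int(11 * sps * nfilts))
sync = digital.pfb_clock_sync_fff(sps, 2 * math.pi / 100.0, rrc_taps, nfilts)

slicer = digital.binary_slicer_fb()                   # sign decision -> 0/1 bytes
sink   = blocks.file_sink(gr.sizeof_char, "bits.u8")  # one byte per recovered bit

tb.connect(src, xlate, quad, sync, slicer, sink)
tb.run()
```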
For fun and giggles, I clicked together a quick demo demod (GRC here):
which shows:

Related

Sound Wave Peak Detection (Cricket Chirping)

I am looking for a way to code or find a program that can record the chirps of crickets either live or through a prerecorded audio file (large ~24 hours) for a lab experiment.
I'm not too sure how to approach this as I'm a web developer, but I have experience with JS and Python, along with their libraries. My initial idea was to use Matplotlib to produce an audio visualizer, and then count each time a certain dB range is reached that matches the dB level of a cricket chirp, but I have no idea how to approach it.
I have successfully visualized the chirps on an online spectrum analyzer (Spectrum Visualizer of Audio Chirps) and can see them clearly; however, I don't know how to use code to count each "chirp" and record it along with its date and time in a table of values / dataset of some sort.
Any guidance or help would be greatly appreciated!
Really naive solution: For every unit of time (column of pixels in the spectrogram), you can calculate the sum of all the values in a column. For example, if all the pixels in a column are black, the sum for that column will be 0, if some of them are colored, the sum will be >0.
Then loop over the sums: if at a certain step you go from 0 to > 0, and then after a while you go back to 0, you've hit a peak.
If the output is too "noisy", you can use a small threshold value instead of 0. Tweak it until the results seem OK. This works well when the background noise is pretty much the same all the way through, but if it drifts up and down over the course of the recording, you need something more complicated (for example, a threshold that constantly changes with the average of the last n sums).
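A minimal Python sketch of that column-sum idea, assuming a WAV recording and a rough chirp band (the filename, frequency band, and threshold are placeholders to tune):

```python
# Sketch of the naive column-sum approach described above.
# Filename, frequency band, and threshold are assumptions to adjust.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("crickets.wav")      # placeholder filename
if samples.ndim > 1:                              # mix stereo down to mono
    samples = samples.mean(axis=1)

freqs, times, spec = spectrogram(samples, fs=rate, nperseg=1024)

# Keep only the band where the chirps live (e.g. 3-8 kHz -- adjust to your data)
band = (freqs >= 3000) & (freqs <= 8000)
column_sums = spec[band, :].sum(axis=0)

threshold = column_sums.mean() + 2 * column_sums.std()   # crude, tweak as needed

chirps = []
in_chirp = False
for t, s in zip(times, column_sums):
    if not in_chirp and s > threshold:        # rising edge: chirp starts
        in_chirp = True
        chirps.append(t)                      # time offset (seconds) into the file
    elif in_chirp and s <= threshold:         # falling edge: chirp ends
        in_chirp = False

print(f"{len(chirps)} chirps detected, first few start times (s): {chirps[:10]}")
```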
Or you can Google "peak detection algorithm" and implement one of them. Or you can Google "peak detection library" in your favourite language and use one of them.

Data recovery of QFSK signal in GNURadio

I'm pretty new to using GNU Radio and I'm having trouble recovering the data from a signal that I've saved into a file. The signal has a carrier frequency of 56 kHz with frequency-shift keying of ±200 Hz at 600 baud.
So far, I've been able to demodulate the signal that looks similar to the signal I get from the source:
I'm trying to get this into a repeating string of 1s and 0s (the whole telegram is 38 bytes long and it continuously repeats). I've tried to use a clock recovery block in order to get only one sample per bit, but I'm not having much luck. Using the M&M clock recovery block, the whole telegram sometimes comes out correct, but it is not consistent. I've tried adjusting the omega and mu values, but it doesn't seem to help much. I've also tried using the Polyphase Clock Sync, but I keep getting a runtime error of 'please specify a filter'. Is this asking me to add taps? What taps would I use?
So I guess my overall question would be: What's the best way to get the telegram out of the demodulated fsk signal?
Again, I'm pretty new at this, so please let me know if I've missed something crucial. GNU Radio flow graph below:
You're recovering the bit timing, but you're not recovering the byte boundaries – that needs to happen "one level higher", e.g. by a well-known packet format with a defined preamble that you can look for.
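As a hedged illustration of that "one level higher" step, here is a sketch that searches a recovered bit stream for a known preamble to locate telegram boundaries. The preamble pattern, bit order, and file name are assumptions, not the actual telegram format:

```python
# Sketch: locate frame boundaries in a recovered bit stream by searching for a
# known preamble. The 16-bit preamble and the bit order are placeholders for
# whatever the real 38-byte telegram format defines.
import numpy as np

bits = np.fromfile("bits.u8", dtype=np.uint8) & 1   # 0/1 bytes, e.g. from a Binary Slicer + File Sink

preamble   = np.array([0, 1] * 8, dtype=np.uint8)   # hypothetical alternating preamble
frame_bits = 38 * 8                                 # 38-byte telegram

# Map bits to +/-1 and correlate: a perfect preamble match scores len(preamble)
matches = np.correlate(2.0 * bits - 1, 2.0 * preamble - 1, mode="valid")
starts  = np.where(matches == len(preamble))[0]

for s in starts:
    frame = bits[s:s + frame_bits]
    if len(frame) < frame_bits:
        break
    # Pack 8 bits per byte (MSB first here -- the real bit order is another assumption)
    telegram = np.packbits(frame)
    print(telegram.tobytes().hex())
```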

Beaglebone Black sampling rate too slow and gives false voltage libpruio

I'm pretty much a noob when it comes to this kind of thing, so if you guys could either help me or direct me to a place to learn what I need to know, I would greatly appreciate it.
Basically my problem is that I am using the libpruio library to continuously sample analog values from the board. Two things are going wrong here.
The first is that whenever the BB is sampling the voltages, the voltage of the wire that is hooked up to the AIN pin goes up. I've observed this by hooking an oscilloscope up to the same wire the pin is sampling. What I see is that whenever the BB starts sampling, the entire signal (just a sound wave from an amplified mic) is shifted up by 0.8-0.9 V. This is also reflected in the values I get from the BB, which are around 30,000 (when they should be 0). Hooking the pin up to ground gets me 0, which is correct, and hooking it up to 1.8 V gets me something like 65520, which is also correct. So maybe it has something to do with the signal being weak?
The second issue is that even though I am receiving values at a rate of something like 500-900 kHz, the actual sample rate seems to be around 11 kHz. What I mean is that I only get a new value every 88 µs, and the values I get in between stay the same until the next 88 µs passes, when I get a new one. These times correspond to the voltage shift up that I mentioned in the previous paragraph. So what I actually see on the oscilloscope is that whenever I sample with the BB, there is a saw wave with its frequency at the 11 kHz I mentioned earlier.
In conclusion, whenever the BB samples, it first increases the voltage at the pin by 0.9 V, takes a sample of that voltage, and the voltage dies down over the next 88 µs, all the while the BB spits back the sample it took at the beginning of the period. I do not want this. I want it to not affect the voltage significantly, and to take new samples every time the code runs.
As for the code I'm using, it's basically a slightly modified version of the IO_Input example in the libpruio library, with the values being stored in an array for later use instead of being printed immediately.
If you guys need any more information, I will gladly post it here, but for now I'm wondering if it is something super obvious that I'm missing.
"Hooking the pin up to ground gets me 0, which is correct, and hooking it up to 1.8 volts gets me something like 65520, which is also correct. So maybe it has something to do with the signal being weak?"
The BBB and libpruio seem to work OK. Check your wiring.
Regarding the sampling rate, the io_input example uses IO mode. If you need accurate timing for the samples use MM mode or RB mode.
Your target isn't very clear, so I cannot give detailed advice. (Some code would also help in understanding what you're trying to do.)
BR

Get peak volume of audio input on iOS

On iOS 7, how do I get the current microphone input volume in a range between 0 and 1?
I've seen several approaches like this one, but the results I get baffle me.
The return values of peakPowerForChannel: are documented to be in the range of -160 to 0 with 0 being the loudest and -160 near absolute silence.
Problem: Given a quiet room and a short but loud noise, the power goes all the way up in an instant but takes a very long time to drop back to the quiet level (way longer than the actual noise...)
What I want: Essentially I want an exact copy of the Audio Input patch of Quartz Composer with its Volume Peak output. Any tips?
To get a similar volume peak measurement, you might have to input raw audio via the iOS Audio Queue API (or the RemoteIO Audio Unit), and analyze the raw PCM waveform samples in each audio callback, looking for a magnitude maximum over your desired frame width or analysis time.
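The per-callback arithmetic itself is simple. Here is a language-agnostic sketch (written in Python for brevity, not iOS API code) of turning one frame of 16-bit PCM samples into a 0-1 peak value and a clamped dBFS figure:

```python
# Illustration of the per-callback computation only (Python for brevity; on iOS
# you would run this over the samples handed to your Audio Queue or RemoteIO
# callback). Frame values are assumed to be 16-bit signed PCM.
import math

def frame_peak_linear(samples):
    """Peak magnitude of one frame, scaled to 0..1."""
    return max(abs(s) for s in samples) / 32768.0

def frame_peak_db(samples, floor_db=-160.0):
    """Same peak expressed in dBFS, clamped to the documented -160..0 range."""
    peak = frame_peak_linear(samples)
    return max(floor_db, 20.0 * math.log10(peak)) if peak > 0 else floor_db

# Example: a quiet frame containing one loud sample
print(frame_peak_linear([12, -30, 25000, -8]))   # ~0.76
print(frame_peak_db([12, -30, 25000, -8]))       # ~-2.4 dBFS
```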

How to Recognize a Pattern in a Stream of Numbers

I have a stream of numbers (integers for the sake of discussion) being sampled off an analog input (an A/D converter attached to a potentiometer). I am curious how I would recognize a pattern in the numbers in real time.
That is to say, if someone quickly twiddles the pot all the way up and back down, how do I recognize that, versus if they turn it only half way? Or what if they turn it up and down three times in a row? How can I convert these actions into distinct "events"? This seems especially tricky to me since the time window over which each of these events will occur is modestly variable.
I can think of a few quick, hacky ways to do this, but nothing that I am confident in. I am also curious how one would expand this to multiple different inputs (e.g. input from a spectrograph). Does that change things dramatically? I am not even sure what topic area I should be googling.
If you know what you are looking for, correlate the input signal against a replica of what you expect. Basically, implement a matched filter. If you want to see when the input stream is -127, -63, 0, 63, 127, implement a direct-form FIR filter with these values as the coefficients. Then look for a maximum on the output. The maximum output of a filter with those coefficients occurs when the data in the filter is -127, -63, 0, 63, 127.
Google "matched filter detection" or "detection theory", maybe even "feature detection".
If you don't know exactly what you are looking for, or what you are looking for is variable, it gets more complicated. You would then try to implement a filter whose output gives you information about what is going on. The example I gave above would show the output spike up when that input sequence occurred. If you then saw that spike occurring with regular frequency, you would guess that the input event was occurring with regular frequency.
If you made your filter 0, 63, 127, 63, 0, which correlates to turning the knob all the way up and then back down again, and on your output you saw the aforementioned spike occur, but with a lower maximum amplitude and a wider time over which the correlation occurs, that might tell you that the knob was turned all the way up and then back down, but either slower or faster than the speed for which the filter is designed to get a maximum response.
To combat this you might implement three of these filters in parallel, one designed for a slow knob turn, one for a medium-speed knob turn, and one for a fast knob turn. Then, looking at the three outputs, you get three different correlations that better help you understand what is occurring.
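Here is a small sketch of the basic matched-filter step, using the illustrative -127, -63, 0, 63, 127 template from above on made-up data (the noise level and event position are arbitrary):

```python
# Sketch of matched-filter detection on a sample stream. Matched filtering here
# is just cross-correlation with the expected pattern (an FIR filter whose taps
# are the time-reversed template). Input values are made up for illustration.
import numpy as np

template = np.array([-127, -63, 0, 63, 127], dtype=float)   # "pot swept upward"

# Made-up input: noise with the expected pattern buried at sample 200
rng = np.random.default_rng(0)
stream = rng.normal(0, 10, 500)
stream[200:205] += template

# Cross-correlate and look for a maximum in the output
output = np.correlate(stream, template, mode="valid")

peak = int(np.argmax(output))
print("strongest match near sample", peak)                  # expect ~200
print("peak value:", output[peak], "ideal:", float(np.dot(template, template)))
```

For the filter-bank variant described above, you would repeat this with slow, medium, and fast (stretched or compressed) copies of the template and compare the three outputs.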
Did you consider taking the running difference of the signal (its differentiation)?
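For what it's worth, a tiny sketch of that running-difference idea on made-up samples: large absolute differences flag fast pot movement, and their sign gives the direction.

```python
# Illustration of the running-difference suggestion; sample values are made up.
import numpy as np

samples = np.array([0, 2, 5, 40, 90, 127, 90, 40, 5, 2, 0], dtype=float)
velocity = np.diff(samples)                       # per-sample change
fast_moves = np.where(np.abs(velocity) > 20)[0]   # threshold is arbitrary
print(velocity)
print("fast movement around samples:", fast_moves)
```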
