GNU Radio DQPSK bit error rate - signal-processing

Almost a month ago I started working on a digital communications project that involves GNU Radio, and I am severely struggling to get past some errors or mismatches I am encountering. I am desperately in need of some expert help.
I made a DQPSK modulator and demodulator using just GNU Radio Companion (screenshots provided). I fed a Vector Source with the bits 0,1,0,1 and repeat enabled into the input of the PSK Mod block. I also used an Error Rate block to calculate the bit error rate (the Vector Source is on the ref port of the Error Rate block and the DQPSK demodulator output is on its in port). I connected a WX GUI scope to the Error Rate block and a constellation sink to the PSK modulator.
Now almost everything that appears on the scopes is completely wrong. The bit error rate is 0.5 even though I have added no noise; that is the worst possible rate, considering that we would recover 50 percent of the bits correctly just by chance. The scope connected to the PSK modulator output shows four constellation points, even though I am transmitting only one symbol, i.e. (0,1).
What am I doing wrong?
Can someone please be kind enough to go through the screenshots and tell me the mistake(s)?

As Timothée Cocault said in his answer to your mail on the gnuradio-discuss@gnu.org mailing list:
Hi Haaris,
The documentation of the PSK Mod says: "The input is a byte stream
(unsigned char), treated as a series of packed symbols. Symbols are
grouped from MSB to LSB." You should add an "Unpacked to Packed block"
with 2 bits per chunk and MSB endianness before. Likewise, you should
add a "Pack K bits" block with K=2 after the PSK Demod.
Also, your assumption that you should have one point in the
constellation sink is wrong. You're using DQPSK so the (0, 1) symbol
will add 90 degrees to the phase, and you will cycle through the 4
points of your constellation.
And last, keep in mind that each block has a delay, and you can't
compare the input and output bits directly. Try to use a "Scope plot"
with 2 inputs, and add a delay block before the input bits to
synchronise the two.
Timothée.
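For reference, the corrected chain can also be wired up in Python. Below is a minimal loopback sketch against the GNU Radio 3.7-era API; the block parameters and the error-counting tail (a delay plus XOR, standing in for GRC's Error Rate block) are assumptions for illustration, not the poster's actual flow graph, and the delay value has to be found experimentally, as Timothée suggests.

# Minimal DQPSK loopback sketch (GNU Radio 3.7-era Python API).
# Assumed parameters for illustration only; the correct delay must be
# found experimentally, e.g. by comparing both streams on a scope.
from gnuradio import gr, blocks, digital

class dqpsk_loopback(gr.top_block):
    def __init__(self, delay_samps=0):
        gr.top_block.__init__(self)
        src = blocks.vector_source_b([0, 1, 0, 1], True)
        # PSK Mod wants packed bytes, symbols grouped MSB-first:
        to_packed = blocks.unpacked_to_packed_bb(2, gr.GR_MSB_FIRST)
        mod = digital.psk.psk_mod(constellation_points=4, differential=True)
        demod = digital.psk.psk_demod(constellation_points=4, differential=True)
        # PSK Demod emits unpacked bits; regroup them into 2-bit chunks
        # so each output byte lines up with one source byte again:
        repack = blocks.pack_k_bits_bb(2)
        ref_delay = blocks.delay(gr.sizeof_char, delay_samps)
        compare = blocks.xor_bb()  # nonzero output byte = symbol error
        sink = blocks.file_sink(gr.sizeof_char, "errors.bin")
        self.connect(src, to_packed, mod, demod, repack, (compare, 0))
        self.connect(src, ref_delay, (compare, 1))
        self.connect(compare, sink)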

Related

Sending Float/Double Values Across CANbus

I am writing code that sends various electrical parameters across a CANbus (power, voltage, current, etc.). Precision is important in this application.
There are two options I see:
Send the value as an integer, but scale it on the sender and receiver sides (e.g. multiply by 100 when sending and divide by 100 when receiving). If I do this, then I can send 2.12 V as 212 (or 0xD4).
Send it as a float value, which would require 32 bits but no scaling.
My questions are as follows:
Can float values be sent across CANbus?
If yes to Q1, is that a common practice? Or do most CANbus communication programmers use scaled integers?
Thank you in advance.
First of all, please read this answer since it answers all the floating point concerns:
Does the "Avoid using floating-point" rule of thumb apply to a microcontroller with a floating point unit (FPU)?
Can float values be sent across CANbus?
Yes, though like any multi-byte value they depend on endianness, so you will have to specify a network byte order for floating point in your CAN application layer. Sending 4 or 8 bytes in a single frame is obviously not a problem.
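As a small illustration, here is a Python sketch of packing a float into a CAN data field; the big-endian choice is an arbitrary assumption that a real application layer would have to pin down:

import struct

# Pack a float32 into 4 bytes for the CAN data field. Big-endian
# ("network order") is an arbitrary choice here; the point is that
# the application layer must fix the byte order explicitly.
def pack_voltage(volts):
    return struct.pack(">f", volts)

def unpack_voltage(data):
    return struct.unpack(">f", data)[0]

payload = pack_voltage(2.12)       # b'\x40\x07\xae\x14'
print(unpack_voltage(payload))     # ~2.12 (float32 rounding applies)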
If yes to Q1, is that a common practice? Or do most CANbus communication programmers use scaled integers?
It is not common practice; unscaled raw data is by far the most common and convenient. For example, if you wish to express a voltage between 0 and 5 V, you could send a raw data value between 0 and 65535 and scale it to volts only where you actually need that unit (for example, when showing the voltage to a human on a display).
The raw data form also has the advantage of being technology forwards/backwards compatible. Suppose you read the voltage with a 10-bit ADC: your values will be in the range 0 to 1023. You can express that as 16-bit raw data by multiplying by 65536/1024 = 64. Some years later you switch to a 12-bit ADC with values up to 4095. You can then keep the very same CAN protocol and format, just with increased resolution, i.e. smaller "steps" of raw data.
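A sketch of that scaling idea in Python; the 5 V full scale and the example reading are assumptions for illustration:

# Map ADC counts onto the full 16-bit raw range, independent of the
# ADC resolution behind it. FULLSCALE_VOLTS = 5.0 is an assumed value.
FULLSCALE_VOLTS = 5.0

def adc_to_raw16(adc_value, adc_bits):
    # A 10-bit reading is multiplied by 65536/1024 = 64, a 12-bit
    # reading by 16, and so on; the wire format never changes.
    return adc_value << (16 - adc_bits)

def raw16_to_volts(raw):
    # Only done at the edges of the system, e.g. for a display.
    return raw * FULLSCALE_VOLTS / 65536.0

print(raw16_to_volts(adc_to_raw16(434, 10)))   # ~2.12 V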

Data recovery of QFSK signal in GNURadio

I'm pretty new to using GNU Radio and I'm having trouble recovering the data from a signal that I've saved into a file. The signal has a carrier frequency of 56 kHz with a frequency shift of +/- 200 Hz at 600 baud.
So far, I've been able to demodulate the signal that looks similar to the signal I get from the source:
I'm trying to get this into a repeating string of 1s and 0s (the whole telegram is 38 bytes long and it continuously repeats). I've tried to use a clock recovery block in order to get just one sample per symbol, but I'm not having much luck. Using the M&M Clock Recovery block, the whole telegram sometimes comes out correct, but it is not consistent. I've tried to adjust the omega and mu values, but that doesn't seem to help much. I've also tried the Polyphase Clock Sync, but I keep getting a runtime error of 'please specify a filter'. Is this asking me to add a tap? What taps would I use?
So I guess my overall question would be: What's the best way to get the telegram out of the demodulated fsk signal?
Again, pretty new at this so please let me know if I've missed something crucial. GNU flow graph below:
You're recovering the bit timing, but you're not recovering the byte boundaries – that needs to happen "one level higher", e.g. by a well-known packet format with a defined preamble that you can look for.
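For illustration, here is a sketch of that "one level higher" step in Python; the 16-bit preamble is made up, since a real protocol defines its own sync word:

import numpy as np

# Hypothetical sync word; substitute the preamble your protocol defines.
PREAMBLE = np.array([1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0], dtype=np.int8)

def find_frame_starts(bits, max_errors=0):
    # Slide the sync word over the recovered bit stream and note every
    # offset whose Hamming distance is within tolerance.
    bits = np.asarray(bits, dtype=np.int8)
    n = len(PREAMBLE)
    return [i for i in range(len(bits) - n + 1)
            if np.count_nonzero(bits[i:i + n] != PREAMBLE) <= max_errors]

# Each match fixes the byte boundaries: from there, the following bits
# can be chopped into 8-bit groups for the 38-byte telegram.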

FSK demodulation with GNU Radio

I'm trying to demodulate a signal using GNU Radio Companion. The signal is FSK (Frequency-shift keying), with mark and space frequencies at 1200 and 2200 Hz, respectively.
The data in the signal is text data generated by a device called GeoStamp Audio. The device generates audio from GPS data fed into it in real time, and it can also decode that audio. I have the decoded text version of the audio for reference.
I have set up a flow graph in GNU Radio (see below), and it runs without error, but with all the variations I've tried, I still can't get the data.
The output of the flow graph should be binary (1s and 0s) that I can later convert to normal text, right?
Is it correct to feed in a wav audio file the way I am?
How can I recover the data from the demodulated signal -- am I missing something in my flow graph?
This is an FFT plot of the wav audio file before demodulation:
This is the result of the scope sink after demodulation (maybe looks promising?):
UPDATE (August 2, 2016): I'm still working on this problem (occasionally), and unfortunately still cannot retrieve the data. The result is a promising-looking string of 1's and 0's, but nothing intelligible.
If anyone has suggestions for figuring out the settings on the Polyphase Clock Sync or Clock Recovery MM blocks, or the gain on the Quad Demod block, I would greatly appreciate it.
Here is one version of an updated flow graph based on Marcus's answer (also trying other versions with polyphase clock recovery):
However, I'm still unable to recover data that makes any sense. The result is a long string of 1's and 0's, but not the right ones. I've tried tweaking nearly all the settings in all the blocks. I thought maybe the clock recovery was off, but I've tried a wide range of values with no improvement.
So, at first sight, my approach here would look something like:
What happens here is that we take the input, shift it in frequency domain so that mark and space are at +-500 Hz, and then use quadrature demod.
"Logically", we can then just make a "sign decision". I'll share the configuration of the Xlating FIR here:
Notice that the signal is first shifted so that the center frequency (the midpoint between 2200 and 1200 Hz) ends up at 0 Hz, and then filtered by a low pass (gain = 1.0, stopband starting at 1 kHz, passband ending at 1 kHz - 400 Hz = 600 Hz). At this point, the actual bandwidth still present in the signal is much lower than the sample rate, so you could also just downsample without losses (set the decimation to something higher, e.g. 16), but for the sake of analysis we won't do that.
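In Python, that front end might look roughly like the sketch below (3.7-era API); the wav source, sample rate, and file sink are assumptions, and the quadrature demod gain is left at 1.0 since only the sign matters for a sign decision:

from gnuradio import gr, blocks, analog, filter
from gnuradio.filter import firdes

samp_rate = 44100                  # assumed wav sample rate
center = (2200 + 1200) / 2.0       # 1700 Hz, midpoint of mark and space

class fsk_frontend(gr.top_block):
    def __init__(self, wav_path):
        gr.top_block.__init__(self)
        src = blocks.wavfile_source(wav_path, False)
        to_c = blocks.float_to_complex()
        # Low pass: passband up to 600 Hz, 400 Hz transition width,
        # so the stopband starts at 1 kHz as described above.
        taps = firdes.low_pass(1.0, samp_rate, 600, 400)
        # Shift 1700 Hz down to DC so mark/space land at roughly +-500 Hz.
        xlate = filter.freq_xlating_fir_filter_ccf(1, taps, center, samp_rate)
        demod = analog.quadrature_demod_cf(1.0)
        sink = blocks.file_sink(gr.sizeof_float, "demod.f32")
        self.connect(src, to_c, xlate, demod, sink)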
The time sink should now show better values. Have a look at the edges; they are probably not extremely steep. For clock sync I'd hence recommend just going and trying the polyphase clock recovery instead of Mueller & Müller; choosing about any "somewhat round" pulse shape could work.
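That also answers the "please specify a filter" error from the question above: the Polyphase Clock Sync needs a taps vector describing a pulse shape. A hedged sketch follows, with root-raised-cosine as one "somewhat round" choice; the sample rate and baud rate are assumptions to be replaced with your signal's values:

from gnuradio import digital
from gnuradio.filter import firdes

samp_rate = 44100.0     # assumed
baud = 1200.0           # assumed symbol rate
sps = samp_rate / baud  # samples per symbol seen by the block
nfilts = 32

# Any reasonably round pulse shape will do as the matched filter here.
taps = firdes.root_raised_cosine(nfilts, nfilts * samp_rate, baud,
                                 0.35, int(11 * sps * nfilts))
clock_sync = digital.pfb_clock_sync_fff(sps, 6.28 / 100.0, taps,
                                        nfilts, nfilts / 2, 1.5, 1)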
For fun and giggles, I clicked together a quick demo demod (GRC here).

How to keep track of the seed

In Lua it's common knowledge that you can use math.randomseed, but it's also apparent that math.random changes the seed as well (calling it twice does not return the same result). What does it set the seed to, and how can I keep track of it? If that's impossible, please explain why that is so.
This is not really a Lua question, but a general question about how an RNG algorithm works.
First, Lua doesn't have its own RNG; it just hands you a (slightly mangled) value from the RNG of the underlying C library. Most RNG implementations do not reveal their inner state, but sometimes you can calculate it yourself.
For example, when you use Lua on Windows, you'll be using the LCG-based RNG from the MS C library. The numbers you get are a slice of the seed, not the full value. There are two ways you can deal with that:
If you know how many times you have called random, you can just take the initial seed value, feed it to your copy of the same algorithm with the same constants that are hardcoded in the MS library, and get the exact value of the seed.
If you don't, but you can be sure that nobody interferes between your two calls to random, you can take two generated numbers and reverse the LCG algorithm by shifting the bits back to their places. This will leave you with several missing bits (plus one more bit thanks to Lua's mangling) that you will need to simply brute-force: just iterate over all the missing bits until your copy of the algorithm produces exactly the same two "random" numbers you recorded before. That will be the current seed stored inside the library's RNG as well. A well-programmed solution in Lua can brute-force this in about 0.2-0.5 s on a somewhat dated PC; I have done it in the past. Here's an example on Crypto.SE discussing this task in more detail: Predicting values from a Linear Congruential Generator.
The first approach can be used with any other RNG algorithm that doesn't use any real entropy; the second with most RNGs that don't mask so many bits of the slice that brute-forcing becomes unreasonable.
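As a concrete illustration of the second approach, here is a brute-force sketch in Python rather than Lua; the constants are those of the Microsoft C runtime's LCG, matching the Windows case above, and the extra mangling Lua applies on top is left out:

# MSVC rand(): state = state * 214013 + 2531011 (mod 2^32),
# output = (state >> 16) & 0x7fff, so each output reveals state bits 16..30.
def next_state(state):
    return (state * 214013 + 2531011) & 0xffffffff

def recover_state(out1, out2):
    # out1 pins bits 16..30; brute-force the 17 unknown bits
    # (the low 16 plus the top bit) against the following output.
    for guess in range(1 << 17):
        state = ((guess >> 16) << 31) | (out1 << 16) | (guess & 0xffff)
        if (next_state(state) >> 16) & 0x7fff == out2:
            return next_state(state)   # the RNG's current internal seed
    return None

A third recorded output can be used to rule out the rare spurious match.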
The real answer, though, is: you don't need to keep track of the seed at all. What you want is probably something else.
Whether or not you set a seed, all the numbers math.random() generates are pseudo-random (this is always the case, as the system will generate a seed by itself if you don't).
math.randomseed(4)
print(math.random())
print(math.random())
math.randomseed(4)
print(math.random())
Outputs
0.50827539156303
0.75454387490399
0.50827539156303
So if you reset the seed to the same value, you can predict all the values that are going to come up, up to the number of consecutive values that you already generated using that seed.
What the seed does not do is keep the output of math.random() the same from call to call. The output would only repeat if you kept resetting the seed to the same value.
An analogy as an example
Imagine the random number is an integer between 0 and 9 (instead of a double between 0 and 1).
math.random() could traverse pi's decimals from an arbitrary starting position (default could be system time).
What you do when you call math.randomseed() is (not literally, this is an analogy as mentioned) set the starting decimal of where in pi you are going to retrieve your numbers.
If you now reset the seed to the same starting position the numbers are going to be the same as the last time you reset the starting position.
You will know the numbers up to the last call you already made; after that you can't be certain anymore.

How could I guess a checksum algorithm?

Let's assume that I have some packets with a 16-bit checksum at the end. I would like to guess which checksum algorithm is used.
For a start, from the dump data I can see that a one-byte change in the packet's payload totally changes the checksum, so I can assume that it isn't some kind of simple XOR or sum.
Then I tried several variations of CRC16, but without much luck.
This question might be biased more towards cryptography, but I'm really interested in any easy-to-understand statistical tools to find out which CRC this might be. I might even resort to trying out different CRC algorithms if everything else fails.
Background story: I have a serial RFID protocol with some kind of checksum. I can replay messages without problems and interpret the results (without checking the checksum), but I can't send modified packets because the device drops them on the floor.
Using existing software, I can change the payload of the RFID chip. However, the unique serial number is immutable, so I don't have the ability to check every possible combination. I could generate dumps of values incrementing by one, but not enough of them to make an exhaustive search applicable to this problem.
Dump files with the data are available if the question itself isn't enough :-)
Need reference documentation? A PAINLESS GUIDE TO CRC ERROR DETECTION ALGORITHMS is a great reference, which I found after asking the question here.
In the end, after the very helpful hint in the accepted answer that it's CCITT, I used this CRC calculator and XORed the generated checksum with the known checksum to get 0xffff, which led me to the conclusion that the final XOR is 0xffff instead of CCITT's 0x0000.
There are a number of variables to consider for a CRC:
Polynomial
Number of bits (16 or 32)
Normal (LSB first) or Reverse (MSB first)
Initial value
How the final value is manipulated (e.g. subtracted from 0xffff), or whether it is a constant value
Typical CRCs:
LRC: Polynomial=0x81; 8 bits; Normal; Initial=0; Final=as calculated
CRC16: Polynomial=0xa001; 16 bits; Normal; Initial=0; Final=as calculated
CCITT: Polynomial=0x1021; 16 bits; reverse; Initial=0xffff; Final=0x1d0f
Xmodem: Polynomial=0x1021; 16 bits; reverse; Initial=0; Final=0x1d0f
CRC32: Polynomial=0xedb88320; 32 bits; Normal; Initial=0xffffffff; Final=inverted value
ZIP32: Polynomial=0x04c11db7; 32 bits; Normal; Initial=0xffffffff; Final=as calculated
The first thing to do is to get some samples by changing, say, the last byte. This will help you figure out the number of bytes in the CRC.
Is this a "homemade" algorithm? In that case it may take some time. Otherwise, try the standard algorithms.
Try changing either the MSB or the LSB of the last byte, and see how this changes the CRC. This will give an indication of the direction (normal or reverse).
To make it more difficult, there are implementations that manipulate the CRC so that it will not affect the communications medium (protocol).
From your comment about RFID, it implies that the CRC is communications related. Usually CRC16 is used for communications, though CCITT is also used on some systems.
On the other hand, if this is UHF RFID tagging, then there are a few CRC schemes - a 5 bit one and some 16 bit ones. These are documented in the ISO standards and the IPX data sheets.
IPX: Polynomial=0x8005; 16 bits; Reverse; Initial=0xffff; Final=as calculated
ISO 18000-6B: Polynomial=0x1021; 16 bits; Reverse; Initial=0xffff; Final=as calculated
ISO 18000-6C: Polynomial=0x1021; 16 bits; Reverse; Initial=0xffff; Final=as calculated
Data must be padded with zeroes to make a multiple of 8 bits
ISO CRC5: Polynomial=custom; 5 bits; Reverse; Initial=0x9; Final=shifted left by 3 bits
Data must be padded with zeroes to make a multiple of 8 bits
EPC class 1: Polynomial=custom 0x1021; 16 bits; Reverse; Initial=0xffff; Final=post processing of 16 zero bits
Here is your answer!
Having worked through your logs, the CRC is the CCITT one. The first byte 0xd6 is excluded from the CRC.
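If you want to reproduce that kind of search yourself, here is a hedged sketch that sweeps a few common CRC-16 parameter sets over a captured packet. The packet bytes below are a stand-in, not data from the actual logs, and the checksum byte order is itself one of the unknowns you may need to try both ways:

def crc16(data, poly, init, reflect, xor_out):
    # Bitwise CRC-16. When reflect is True, `poly` must already be
    # given in reversed form (e.g. 0xa001 for the 0x8005 polynomial).
    crc = init
    for byte in data:
        if reflect:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        else:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ poly) & 0xffff if crc & 0x8000 else (crc << 1) & 0xffff
    return crc ^ xor_out

# (name, poly, init, reflect, xor_out) - a few common candidates
CANDIDATES = [
    ("CCITT",              0x1021, 0xffff, False, 0x0000),
    ("CCITT, xorout ffff", 0x1021, 0xffff, False, 0xffff),
    ("XMODEM",             0x1021, 0x0000, False, 0x0000),
    ("CRC16/ARC",          0xa001, 0x0000, True,  0x0000),
    ("MODBUS",             0xa001, 0xffff, True,  0x0000),
]

packet = bytes.fromhex("d601020304beef")   # stand-in; paste bytes from your dump
payload, expected = packet[:-2], int.from_bytes(packet[-2:], "big")
for name, poly, init, refl, xout in CANDIDATES:
    for skip in (0, 1):                    # also try excluding the first byte
        if crc16(payload[skip:], poly, init, refl, xout) == expected:
            print("match:", name, "first byte skipped:", bool(skip))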
It might not be a CRC; it might be an error-correcting code like Reed-Solomon.
ECC codes are often a substantial fraction of the size of the original data they protect, depending on the error rate they are meant to handle. If the size of the messages is more than about 16 bytes, 2 bytes of ECC wouldn't be enough to be useful. So if the messages are large, you're most likely correct that it's some sort of CRC.
I'm trying to crack a similar problem here, and I found a pretty neat website that will take your file and run checksums on it with 47 different algorithms and show the results. If the algorithm used to calculate your checksum is any of those, you can simply find it among the list of produced checksums with a text search.
The website is https://defuse.ca/checksums.htm
You would have to try every possible checksum algorithm and see which one generates the same result. However, there is no guarantee about what content was included in the checksum. For example, some algorithms skip white space, which leads to different results.
I really don't see why somebody would want to know that, though.
