GNURadio Companion OQPSK demodulation and CC/RS decoding - signal-processing

I have an S-band radio that passes data through a Reed-Solomon encoder (255,223), then an NRZ-M PCM encoder, then a convolutional encoder (octal 171, 133 connection vectors), and finally through an OQPSK modulator.
I have a file with raw received data and I am trying to demodulate and decode the signal to recover the original data. Currently I'm using an MPSK receiver (deprecated, but I don't know what to use otherwise) and a PSK demodulator, then feeding that data through "Decode CCSDS 27" (the convolutional decoder), then through a custom block I wrote to decode NRZ-M, and finally sending that bitstream to the CCSDS Decoder, which performs the Reed-Solomon decoding. However, I'm not getting any of my data out of the CCSDS Decoder. I'm a bit of an amateur in this field, so I'm sure there is much to be improved upon.
Here's a screenshot of my flowgraph.
Any help is greatly appreciated!
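One thing worth double-checking is the custom NRZ-M block. NRZ-M decoding is just a differential decode: a transmitted 1 is a level transition, a 0 is no transition, so each output bit is the XOR of the current and previous input bits. A minimal sketch in C, assuming hard-decision bits packed one per byte (how your custom block works internally is an assumption on my part):

    #include <stdint.h>
    #include <stddef.h>

    /* NRZ-M decode: a '1' was encoded as a level transition, a '0' as no
     * transition, so the decoded bit is simply current XOR previous. */
    void nrzm_decode(const uint8_t *in, uint8_t *out, size_t n)
    {
        uint8_t prev = 0;              /* assumed initial reference level */
        for (size_t i = 0; i < n; i++) {
            out[i] = in[i] ^ prev;     /* 1 iff the level changed */
            prev = in[i];
        }
    }

The first output bit depends on the assumed initial level, so the downstream frame sync has to tolerate one bad bit at the start. Note also that this differential scheme is insensitive to data inversion, which helps with the demodulator's 180-degree phase ambiguity; if the ambiguity isn't resolved before the RS stage, the CCSDS Decoder will see garbage no matter what.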

Related

Which method in lua could I use to most effectively get the hertz frequency of a certain position in a .wav file?

So I want to be able to convert a .wav file to a JSON table using Lua, which would probably look something like {time="0:39.34", hz=440}. I already have my JSON libraries; I just need a way to turn a .wav file into data I can convert to JSON. If a library already does this, I need its source code so I can fold it into a single-file program.
At each point in the WAV you'll have a full spectrum, not just "the hertz frequency". You'll have to perform a Fourier transform on the data and, from the many peaks in the spectrum, select the one you're interested in, be it the fundamental, the dominant, etc.
There are libraries for the Fast Fourier Transform out there, like LuaFFT, but you'd better get a clearer picture of what you really need from the WAV. If you're just trying to read a DTMF signal, you don't need full-scale spectrum analysis.
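To make the "full spectrum" point concrete, here is a minimal sketch in C of pulling the dominant frequency out of one window of samples with a naive DFT. No library is needed at this size; a real implementation would use an FFT (e.g. LuaFFT from Lua) plus a window function:

    #include <math.h>
    #include <stddef.h>

    /* Return the dominant frequency (Hz) in a window of PCM samples by a
     * naive DFT: compute the squared magnitude of each bin and keep the
     * largest. O(n^2), fine for short windows; use an FFT for long ones. */
    double dominant_hz(const float *x, size_t n, double sample_rate)
    {
        const double two_pi = 2.0 * acos(-1.0);
        size_t best_k = 0;
        double best_mag = -1.0;
        for (size_t k = 1; k < n / 2; k++) {   /* skip DC, stop at Nyquist */
            double re = 0.0, im = 0.0;
            for (size_t t = 0; t < n; t++) {
                double ph = two_pi * (double)k * (double)t / (double)n;
                re += x[t] * cos(ph);
                im -= x[t] * sin(ph);
            }
            double mag = re * re + im * im;
            if (mag > best_mag) { best_mag = mag; best_k = k; }
        }
        return (double)best_k * sample_rate / (double)n;
    }

Calling this on, say, 4096 samples starting at 39.34 s gives one {time, hz} pair for the JSON table; the frequency resolution is sample_rate / n, so the window length is a trade-off between time precision and frequency precision.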

How to convert hexadecimal data (stored in a string variable) to an integer value

Edit (abstract)
I tried to interpret Char/String data as Byte, 4 bytes at a time. This was because I could only get TComport/TDatapacket to interpret streamed data as String, not as any other data type. I still don't know how to get the Read method and OnRxBuf event handler to work with TComport.
Problem Summary
I'm trying to get data from a mass spectrometer (MS) using some Delphi code. The instrument is connected with a serial cable and follows the RS232 protocol. I am able to send commands and process the text-based outputs from the MS without problems, but I am having trouble with interpreting the data buffer.
Background
From the user manual of this instrument:
"With the exception of the ion current values, the output of the RGA are ASCII character strings terminated by a linefeed + carriage return terminator. Ion signals are represented as integers in units of 10^-16 Amps, and transmitted directly in hex format (four byte integers, 2's complement format, Least Significant Byte first) for maximum data throughput."
I'm not sure (1) whether hex data can be stored properly in a string variable, (2) how to implement 2's complement in Delphi, or (3) how to handle the Least-Significant-Byte-first ordering.
Following David Heffernan's advice, I went and revised my data types. Attempting to harvest binary data from characters doesn't work, because not all values from 0-255 can be properly represented; you lose data along the way, especially if your data arrives 4 bytes at a time.
The solution for me was to use the Async Professional component instead of Dejan's ComPort lib. It handles data streams better and has a built-in log that I could use to figure out how to interpret the streamed responses from the instrument. It's also better documented. So if you're new to serial communications (like I am), give that a go instead.
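For the byte assembly itself, once the four bytes arrive as raw binary rather than characters, the logic is small. A sketch in C; the same shift-and-or translates directly to Delphi's Byte and Integer types:

    #include <stdint.h>

    /* Assemble a 32-bit 2's-complement integer transmitted least
     * significant byte first. Or-ing the shifted bytes builds the
     * little-endian value; casting the unsigned result to int32_t
     * reinterprets it as 2's complement (exact on 2's-complement
     * targets), so negative ion currents come out correctly. */
    int32_t ion_current_from_bytes(const uint8_t b[4])
    {
        uint32_t u = (uint32_t)b[0]
                   | ((uint32_t)b[1] << 8)
                   | ((uint32_t)b[2] << 16)
                   | ((uint32_t)b[3] << 24);
        return (int32_t)u;
    }

The manual's phrase "hex format" appears to mean raw binary bytes (that's what gives the throughput), so no hex-string parsing is involved: the only real work is the byte order and the signed reinterpretation.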

Importing MNIST dataset with Fortran

A Linux/GFortran question.
I know exactly what my problem is but I can't figure out how to solve it...
I want to import the MNIST dataset images and labels into Fortran arrays to play around with Machine Learning algorithms using Fortran. I've done this with Python but I can't replicate reading the data files with Fortran.
The dataset files and file layout descriptions are at:
http://yann.lecun.com/exdb/mnist/
The 2 problems I'm struggling with are...
1) The data in the files is stored as unsigned bytes, and I can't find an equivalent datatype in Fortran. I'm using integer(kind=1) to read the first 4 bytes (the file magic number) successfully, but I'm worried about misreading a byte value of 128-255 into the signed integer(kind=1) datatype.
2) The data is stored in big-endian format, so when I read the number of images, rows and columns, which are stored as 4-byte integers, on my little-endian machine, I get the obvious gobbledegook. Ideally, I would like to be able to specify the endianness of a variable read from a file in an edit descriptor. Is this possible?
Any assistance would be much appreciated.
Kind regards
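For reference, here is the byte-level logic in C that the Fortran reader has to replicate: assemble each 4-byte big-endian header field most-significant-byte first, and widen each unsigned pixel byte so values 128-255 don't come out negative. (On the Fortran side, I believe GFortran also accepts convert='big_endian' on the OPEN statement for unformatted stream I/O, which would address problem 2 directly.)

    #include <stdint.h>
    #include <stdio.h>

    /* Assemble a 4-byte big-endian field from the IDX file header. */
    static uint32_t be32(const uint8_t *b)
    {
        return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16)
             | ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
    }

    int main(void)
    {
        FILE *f = fopen("train-images-idx3-ubyte", "rb");
        if (!f) return 1;

        uint8_t hdr[16];
        if (fread(hdr, 1, 16, f) != 16) { fclose(f); return 1; }

        uint32_t magic = be32(hdr);        /* 0x00000803 for image files */
        uint32_t nimg  = be32(hdr + 4);
        uint32_t nrows = be32(hdr + 8);
        uint32_t ncols = be32(hdr + 12);
        printf("magic=%08x images=%u rows=%u cols=%u\n",
               magic, nimg, nrows, ncols);

        /* Pixels are unsigned bytes; widen before use so 128..255 stay
         * positive (the Fortran analogue: read integer(kind=1), then add
         * 256 to any negative value, storing into a wider integer). */
        int first = fgetc(f);
        if (first != EOF) printf("first pixel = %d\n", first);

        fclose(f);
        return 0;
    }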

How to decode an H.264 slice-mode stream with type-9 NALs on iOS

The input raw bitstream contains type-9 NALs (access unit delimiters).
There are multiple NALs of type 5 or 1 between the type-9 NALs, for example: {NAL-9, NAL-5, NAL-5, NAL-9, NAL-1, NAL-1, NAL-9, NAL-1, NAL-1, ...}. I believe the encoder used slice mode (the raw stream is from a Windows H.264 encoder with slice mode enabled via the low-latency flag).
How do I feed such a stream to iOS?
My guess: put all NALs between two NAL-9s into one buffer, remove the NAL-9s, and send those buffers to the HW decoder.
In my example, the buffers going to the decoder would look like: {NAL-5, NAL-5}, {NAL-1, NAL-1}, {NAL-1, NAL-1}. The output shows that decoding happens, but with a lot of green artifacts, which makes me think something is missing in the above approach.
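The grouping itself sounds right, but note that the iOS hardware decoder (VideoToolbox) expects AVCC-style input: every NAL prefixed by its length as a 4-byte big-endian integer, with the SPS/PPS delivered separately in the video format description rather than inside the sample buffer. Feeding Annex-B start codes through unchanged, or losing one slice of an access unit, typically produces exactly this kind of green-artifact output. A sketch in C of packing one access unit, assuming you already have pointers and sizes for each slice NAL with start codes stripped (the helper below is mine, not an Apple API):

    #include <stdint.h>
    #include <string.h>
    #include <stddef.h>

    /* Pack the slice NALs of one access unit (everything between two
     * NAL-9 delimiters, start codes already stripped) into a single
     * AVCC-style buffer: each NAL preceded by its length as a 4-byte
     * big-endian integer. Returns bytes written, or 0 if out is too small. */
    size_t pack_access_unit(const uint8_t *const nal[], const size_t nal_len[],
                            size_t nal_count, uint8_t *out, size_t out_cap)
    {
        size_t off = 0;
        for (size_t i = 0; i < nal_count; i++) {
            if (off + 4 + nal_len[i] > out_cap) return 0;
            uint32_t len = (uint32_t)nal_len[i];
            out[off++] = (uint8_t)(len >> 24);   /* big-endian length prefix */
            out[off++] = (uint8_t)(len >> 16);
            out[off++] = (uint8_t)(len >> 8);
            out[off++] = (uint8_t)(len);
            memcpy(out + off, nal[i], nal_len[i]);
            off += nal_len[i];
        }
        return off;   /* hand this to the decoder as one sample */
    }

With slice mode enabled, all slices of one picture should reach the decoder together as a single sample; if a {NAL-5, NAL-5} pair is split across two submissions, the missing regions decode as green.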

Writing a lexer for chunked data

I have an embedded application which communicates with a RESTful server over HTTP. Some services involve sending some data to the client which is interpreted using a very simple lexer I wrote using flex.
Now I'm in the process of adding a gzip compression layer to reduce bandwidth consumption, but I'm not satisfied with the current architecture because of its memory requirements: first I receive all the data into a buffer, then I decompress the whole buffer into a new buffer, and then I feed all of it to flex.
I can save some memory between the first and second steps by feeding chunked data from the HTTP client to the zlib routines. But I'm wondering whether it's possible to do the same between the zlib chunked output and the flex input.
Currently I use only yy_scan_bytes and yylex to analyze the input. Does flex have any feature to feed multiple chunks of data to yylex? I've read the documentation about multiple input buffers but to no avail.
YY_INPUT seems to be the correct answer:
The nature of how [the scanner] gets its input can be controlled by defining the
YY_INPUT macro. The calling sequence for YY_INPUT() is
YY_INPUT(buf,result,max_size). Its action is to place up to max_size
characters in the character array buf and return in the integer
variable result either the number of characters read or the constant
YY_NULL (0 on Unix systems) to indicate `EOF'.
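Given that contract, the zlib output can be wired straight into the scanner with no intermediate full-size buffer. A sketch in C, assuming a hypothetical http_read_chunk() that returns the next compressed chunk from the HTTP client; the z_stream/inflate() calls are zlib's real API, initialized once with inflateInit2(&strm, 16 + MAX_WBITS) so the gzip header is consumed:

    #include <zlib.h>
    #include <stddef.h>

    /* Hypothetical HTTP helper: next compressed chunk, 0 at end of body. */
    extern size_t http_read_chunk(unsigned char *buf, size_t cap);

    static z_stream strm;              /* inflateInit2(&strm, 16 + MAX_WBITS)
                                          must run once before scanning */
    static unsigned char zin[512];     /* compressed input staging buffer */

    /* Fill flex's buffer with up to max_size decompressed bytes. */
    static int gz_input(char *buf, size_t max_size)
    {
        strm.next_out  = (unsigned char *)buf;
        strm.avail_out = (uInt)max_size;
        while (strm.avail_out == (uInt)max_size) {  /* no output yet */
            if (strm.avail_in == 0) {               /* need more input */
                size_t n = http_read_chunk(zin, sizeof zin);
                if (n == 0) return 0;               /* EOF -> YY_NULL */
                strm.next_in  = zin;
                strm.avail_in = (uInt)n;
            }
            int rc = inflate(&strm, Z_NO_FLUSH);
            if (rc == Z_STREAM_END) break;          /* gzip stream finished */
            if (rc != Z_OK) return 0;               /* treat errors as EOF */
        }
        return (int)(max_size - strm.avail_out);    /* bytes produced */
    }

    #define YY_INPUT(buf, result, max_size) \
        { (result) = gz_input((buf), (max_size)); }

The YY_INPUT definition goes in the definitions section of the .l file; yylex() then pulls decompressed data on demand, so the only buffers alive at once are the small compressed staging buffer and flex's own.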
