How does a GNU Radio source block know how many samples to output?

I'm trying to understand how gnuradio source blocks work. I know how to make a simple one that outputs a constant and I understand what sample rate means, but I'm not sure how (or where) to combine the two.
Is the source block in charge of regulating the amount of data to output? Or does the amount it outputs depend on the other blocks in the flow graph and how much they consume? Some source blocks take sample_rate as an input, which makes me think it's the former. But other blocks don't, which makes me think it's the latter.
If a source block is in charge of its sample rate, how does it regulate it? Do they check the system clock and output samples based upon that?

Do they check the system clock and output samples based upon that?
Definitely not. All GNU Radio blocks operate at the maximum speed the processor can give.
However, GNU Radio relies on the fact that a flowgraph may have a source and/or sink device (e.g. a USRP, another SDR device, or a sound card) that produces/consumes samples at a constant rate. Consequently, the flowgraph is throttled at the rate of the hardware.
To avoid CPU saturation when none of these hardware devices is present, GNU Radio provides the Throttle block, which tries (not very accurately) to limit throughput to the given number of samples per second by sleeping for a suitable amount of time between the samples that pass through it.
As far as the sample_rate parameter is concerned, excluding the Throttle and device-specific blocks, it is used only for graphical representation or internal calculations.
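To make that concrete, here is a minimal sketch of a Python source block (the class name and the 32000 samples/s throttle rate are arbitrary choices, not from the question). The point is that work() just fills whatever buffer the scheduler hands it; the block never decides on its own how many samples per second to produce, and the Throttle is what paces the flowgraph when no hardware sink is present:

```python
import time
import numpy as np
from gnuradio import gr, blocks

class constant_source(gr.sync_block):
    """Toy source: the scheduler decides how many items to ask for."""
    def __init__(self, value=1.0):
        gr.sync_block.__init__(self,
                               name="constant_source",
                               in_sig=None,            # no inputs: this is a source
                               out_sig=[np.float32])
        self.value = value

    def work(self, input_items, output_items):
        out = output_items[0]
        # len(out) is chosen by the scheduler, based on how much free
        # buffer space the downstream blocks have left -- not by this block.
        out[:] = self.value
        return len(out)

if __name__ == "__main__":
    tb = gr.top_block()
    src = constant_source()
    # Without a hardware sink, the Throttle keeps the flowgraph from
    # spinning at full CPU speed; 32000 items/s is an arbitrary rate.
    thr = blocks.throttle(gr.sizeof_float, 32000)
    sink = blocks.null_sink(gr.sizeof_float)
    tb.connect(src, thr, sink)
    tb.start()
    time.sleep(2)    # let it run briefly
    tb.stop()
    tb.wait()
```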

Related

Reaching clock regions using BUFIO and BUFG

I need to realize a source-synchronous receiver in a Virtex 6 that receives data and a clock from a high speed ADC.
For the SERDES module I need two clocks, which are basically the incoming clock buffered by a BUFIO and a BUFR (as recommended). I hope my picture makes the situation clear.
(Figure: clock distribution)
My problem is that I have some IOBs that cannot be reached by the BUFIO because they are in a different, non-adjacent clock region.
A friend recommended using the MMCM and connecting the output to a BUFG, which can reach all IOBs.
Is this a good idea? Can't I connect my LVDS clock buffer directly to a BUFG, without using the MMCM before?
My knowledge of FPGA architecture and clock regions is still very limited, so it would be nice if anybody has some good ideas, wise words, or has perhaps worked out a solution to a similar problem in the past.
It is quite common to use an MMCM for external clock inputs, if only to clean up the signal and provide some other nice features (like 90/180/270 degree phase shifts for quad-data-rate sampling).
With the 7-series, Xilinx introduced the multi-region clock buffer (BUFMR), which might help you here. Xilinx has published a useful answer record on which clock buffer to use when: 7 Series FPGA Design Assistant - Details on using different clocking buffers
I think your friend's suggestion is correct.
Also check this application note for some suggestions: LVDS Source Synchronous 7:1 Serialization and Deserialization Using Clock Multiplication

IR receiver powered by AndroidThings

Is it possible to implement an IR receiver on Android Things?
1st idea:
Use a GPIO pin as input, buffer the level changes, and then parse the buffer to decode a message.
Findings:
The GPIO listener mechanism is too slow to observe an IR signal.
Another way is to read the GPIO in an infinite loop, but all IR protocols depend heavily on timing, and Java (Dalvik) is not accurate enough for that.
2nd idea:
Use UART.
Findings:
It seems possible to adjust the baud rate so that all bits of a message can be observed, but the UART API requires configuring the number of start bits etc., and that is a problem because IR protocols do not fit that scheme.
IMHO, at the moment UART is the only path, but it would be a huge workaround.
The overarching problem (as you've discovered) is that any non-realtime system will have difficulty parsing this input directly because of the timing constraints. This is a job best suited to a microcontroller where you can access a timer interrupt. Grab an inexpensive tinyAVR or PIC to manage the sensor for you.
You will also want to use a dedicated receiver sensor (you might already be doing this) to simplify parsing the signal. These sensors include a demodulator, which means you don't have to deal with the 38 kHz carrier; the output is converted into a more standard PWM-like waveform.
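To show why the timing resolution matters, here is a small, platform-independent Python sketch (the helper name, the 1 ms threshold, and the example widths are illustrative only, not from the question) that classifies the mark/space durations you would get from such a demodulated receiver output. The pulse widths are in the hundreds of microseconds, well below what a Java GPIO listener callback can reliably resolve:

```python
# Hypothetical helper: classify demodulated-IR edge timestamps into
# short/long marks and spaces.  Real protocols (NEC, RC-5, ...) differ in
# their exact widths; the 1 ms boundary below is only an illustrative split.

def classify_edges(edges):
    """edges: list of (timestamp_seconds, level) pairs from the receiver output."""
    symbols = []
    for (t0, level), (t1, _) in zip(edges, edges[1:]):
        width = t1 - t0                               # how long the line held `level`
        kind = "mark" if level else "space"
        size = "short" if width < 0.001 else "long"   # ~1 ms illustrative threshold
        symbols.append((kind, size, width))
    return symbols

# Example: a short mark (~0.56 ms) followed by a long space (~1.7 ms),
# pulse widths typical of consumer IR remotes.
example = [(0.0, 1), (0.00056, 0), (0.00225, 1)]
for sym in classify_edges(example):
    print(sym)
```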
I believe you can't process the IR signal in Java because the pulses would be shorter than the reading resolution, at least on a Raspberry Pi. To get faster GPIO readings, I'm confident you can do it in C++ with the NDK on the Raspberry Pi. Though it's not officially supported, there are some tricks to enable it. See How to do GPIO on Android Things bypassing Java for how to write to GPIO in C. From there it should be trivial to read in a tight loop, though I would still try to hook the trigger from Java, since so far I have no clear and easy idea of how to write/install interrupts in C.

Time between callback calls?

I have a lab project that uses mainly PyAudio, and to further understand how it works I made some measurements, in this case of the time between callbacks (using callback mode).
I timed it, and got an interesting result
(256-sample chunk size, 44.1 kHz sample rate): 0.0099701; 0.0000365; 0.0000201; 0.0201579
This pattern goes on and on.
Between two longer calls we have two shorter calls, and sometimes the longer call is shorter (mind you, I don't do anything in the program other than time the callbacks).
If we average this out we get our desired callback time:
256/44100 (roughly 5.8 ms)
Here is my measurement visualized:
So can someone explain what exactly happens here under the hood?
What happens under the hood in PortAudio is dependent on a number of factors, including:
Which native audio API PortAudio is talking to
What buffer size and latency parameters you passed to Pa_OpenStream()
The capabilities of the audio hardware and its drivers, including its supported buffer sizes, buffering model and timing characteristics.
Under some circumstances PortAudio will request larger buffers from the native audio API and then invoke the PortAudio user callback multiple times in quick succession. This can happen if you have selected a small callback buffer size and a long latency.
Another scenario is that the native audio API doesn't support the buffer size that you requested for your callback size (the framesPerBuffer parameter to Pa_OpenStream()). In this case PortAudio is forced to use a driver-supported buffer size and then "adapt" between that buffer size and your callback buffer size. This adaptation process can cause irregular timing.
Yet another possibility is that the native audio API uses a large ring buffer. Each time PortAudio polls the native host API, it will work to fill the native ring buffer by calling your callback as many times as needed. In this case irregular timing is related to the polling rate.
The above are not the only possibilities.
One likely explanation of what is happening in your case is that PortAudio is calling your callback 3 times in fast succession (a guess would be that the native buffer size is 3x your callback buffer size), for one of the reasons above.
Another possibility is that the native audio subsystem is signalling PortAudio irregularly. This can happen if a system layer below PortAudio is doing similar kinds of buffering to what I described above. I have seen this happen with DirectSound on Windows 7 for example. ASIO4ALL drivers will exhibit +/- 1ms jitter (which is not what you're seeing).
You can try reducing the requested stream latency to 0 and see if that changes the result. This will force double-buffering, which may or may not produce stable output. Another thing to try is to use the paFramesPerBufferUnspecified parameter, which will cause the callback to be called with the native buffer size -- then you can observe whether there is greater periodicity, what that buffer size is, and also whether the buffer size varies from callback to callback.
You didn't say which operating system and host API you're targeting, so it's hard to give more specific details than the above.
The internal buffering models used by the various PortAudio host API backends are described in some detail on the PortAudio wiki.
To answer a related question: why is it like this? Aside from the cases where it is a function of the lower layers of the native audio subsystem, or the buffer adaption process, it is often a result of specifying a large suggested latency to Pa_OpenStream(). Some PortAudio host APIs will relax the buffer periodicity if the specified latency is very high, in order to reduce system load that would be caused by high-frequency timer callbacks.
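To run the experiment suggested above, a minimal PyAudio sketch along these lines logs both the inter-callback interval and the frame count per callback. The 256-frame buffer and 44.1 kHz rate come from the question; the silence output, mono float32 format, and 2-second run are arbitrary assumptions:

```python
import time
import pyaudio

intervals = []
last = [None]          # mutable cell so the callback can update it

def callback(in_data, frame_count, time_info, status):
    now = time.perf_counter()
    if last[0] is not None:
        intervals.append((now - last[0], frame_count))
    last[0] = now
    # Output stream: return frame_count frames of silence (float32 mono = 4 bytes/frame).
    return (b"\x00" * 4 * frame_count, pyaudio.paContinue)

pa = pyaudio.PyAudio()
# The stream starts automatically on open (PyAudio's default start=True).
stream = pa.open(format=pyaudio.paFloat32, channels=1, rate=44100,
                 output=True, frames_per_buffer=256,
                 stream_callback=callback)
time.sleep(2)
stream.stop_stream()
stream.close()
pa.terminate()

for dt, frames in intervals[:20]:
    print(f"{dt*1000:7.3f} ms   {frames} frames")
```

Swapping frames_per_buffer for a larger value, or for 0 (which is assumed here to map through to PortAudio's paFramesPerBufferUnspecified), lets you observe whether the grouping of callbacks follows the native buffer size.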

What are asynchronous circuits?

There are combinational and sequential circuits. In sequential circuits a memory element is used. Do asynchronous circuits also use flip-flop-like memory elements? And how are they unstable, which makes them a poor choice for a circuit? How can the instability in an asynchronous circuit be explained?
Probably you are referring to a system that is clock driven. The data is latched at a certain part of the clock period (normally the rising/falling edge). If you are using an asynchronous circuit, it can change its state at any time, including at the moment the data is latched. This leads to instability.
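A standard textbook way to quantify this instability (a rule of thumb, not something stated in the answer above) is the mean time between metastability failures of the flip-flop that samples the asynchronous signal:

\[ \mathrm{MTBF} = \frac{e^{\,t_{met}/\tau}}{T_0 \, f_{clk} \, f_{data}} \]

where \(t_{met}\) is the time allowed for the flip-flop to resolve, \(\tau\) and \(T_0\) are device-specific constants, and \(f_{clk}\), \(f_{data}\) are the clock rate and the rate of asynchronous data transitions. The exponential shows why an input that can change at any instant, rather than safely before the clock edge, makes the failure rate dramatically worse.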

NSThread, NSOperation or GCD for CoreMotion and accurate timing purposes?

I'm looking to do some high-precision Core Motion reading (>=100 Hz if possible) and motion analysis on the iPhone 4+ which will run continuously for the duration of the main part of the app. It's imperative that the motion response and the signals that the analysis code sends out are as free from lag as possible.
My original plan was to launch a dedicated NSThread based on the code in the metronome project as referenced here: Accurate timing in iOS, along with a protocol for motion analysers to link in and use the thread. I'm wondering whether GCD or NSOperation queues might be better?
My impression after copious reading is that they are designed to handle a quantity of discrete, one-off operations rather than a small number of operations performed over and over again at a regular interval, and that using them every millisecond or so might inadvertently create a lot of thread creation/destruction overhead. Does anyone have any experience here?
I'm also wondering about the performance implications of an endless while loop in a thread (such as in the code in the above link). Does anyone know more about how things work under the hood with threads? I know that the iPhone 4 (and earlier) has a single-core processor and uses some sort of intelligent multitasking (pre-emptive?) which switches threads based on various timing and I/O demands to create the effect of parallelism...
If you have a thread with a simple while loop running endlessly but only doing additional work every millisecond or so, does the processor's scheduling algorithm consider the endless loop a "high demand" on resources, thus hogging them from other threads, or will it be smart enough to allocate resources more heavily towards other threads in the "downtime" between additional code execution?
Thanks in advance for the help and expertise...
IMO the bottleneck is rather the sensors. The actual update frequency is most often not equal to what you have specified. See update frequency set for deviceMotionUpdateInterval it's the actual frequency? and Actual frequency of device motion updates lower than expected, but scales up with setting
Some time ago I made a couple of measurements using Core Motion and the raw sensor data as well. I needed a high update rate too, because I was doing a Simpson integration and thus wanted to minimise errors. It turned out that the real frequency is always lower and that there is a limit at about 80 Hz. It was an iPhone 4 running iOS 4. But as long as you don't need this for scientific purposes, in most cases 60-70 Hz should fit your needs anyway.
