FreeRTOS ISR at 50 kHz - task

I want to blink an LED (toggle a GPIO pin) synchronously with a hardware timer configured to raise an interrupt at 50 kHz on an ARM Cortex-M4.
In my current code, I toggle one GPIO pin in a dedicated ISR handler triggered by a 50 kHz external clock signal. The result is that the GPIO pin toggles very erratically, at random frequencies anywhere from 1 kHz to 96 kHz.
The operating system is not running any other task apart from the tick interrupt (100 Hz, at the lowest priority), the idle task, and my specific ISR handler.
On the other hand, this "toggling solution" works perfectly in a bare-metal implementation on the same MCU, so my problem seems to come from my lack of knowledge of the FreeRTOS environment.
Is it feasible to toggle an LED in a FreeRTOS ISR at 50 kHz?
Do I need to do it in a task waiting for the 50 kHz interrupt signal?
Should I create a timed toggling task at 50 kHz and synchronize it periodically with the external clock signal?

If you want the 50 kHz to be accurate (no jitter), the priority of the interrupt has to be above configMAX_SYSCALL_INTERRUPT_PRIORITY, so the kernel never masks it; note that an ISR at that level must not call any FreeRTOS API functions. See https://www.freertos.org/a00110.html#kernel_priority and https://www.freertos.org/RTOS-Cortex-M3-M4.html (assuming you are using a port that supports interrupt nesting).
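As a minimal sketch of what that means in practice, assuming an STM32-style port where configMAX_SYSCALL_INTERRUPT_PRIORITY corresponds to priority 5 (the value used in the official demos); TIM2_IRQn is a placeholder for whichever timer you use, and remember that on Cortex-M a numerically lower priority is more urgent:

    #include "stm32f4xx.h"  /* CMSIS device header (assumed part) */

    void timer_irq_setup(void)
    {
        /* ...configure the timer for a 50 kHz update event here... */

        /* Priority 1 is numerically below 5, so the FreeRTOS kernel
         * never disables this interrupt (no kernel-added jitter). */
        NVIC_SetPriority(TIM2_IRQn, 1);
        NVIC_EnableIRQ(TIM2_IRQn);
    }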

I finally found out that the instruction cache was not enabled in the FreeRTOS implementation of my 50 kHz ISR, unlike in the bare-metal version. It now works almost as expected (some jitter remains).
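The Cortex-M4 core itself has no CPU cache, so on many parts the "instruction cache" lives in the flash controller. A minimal sketch, assuming an STM32F4 and ST's CMSIS headers (register names differ on other vendors' parts):

    #include "stm32f4xx.h"

    void flash_cache_enable(void)
    {
        FLASH->ACR |= FLASH_ACR_PRFTEN  /* flash prefetch        */
                    | FLASH_ACR_ICEN    /* ART instruction cache */
                    | FLASH_ACR_DCEN;   /* ART data cache        */
    }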
In reply to my own questions, I suggest the following:
Is it feasible to toggle an LED in a FreeRTOS ISR at 50 kHz?
Definitely yes. FreeRTOS can service ISRs at any frequency; feasibility depends only on the computational capabilities of the MCU together with its instruction and data access performance. FreeRTOS will certainly add some delay compared to a bare-metal implementation (ISR overhead and port efficiency), but again, how significant this is depends on the MCU's performance and the frequency in question.
Do I need to do it in a task waiting for the 50 kHz interrupt signal?
Not necessarily, but it is a good alternative if irregular or fairly heavy processing is needed. However, the FreeRTOS deferral mechanism costs time, and that may be too much to finish the processing before the next interrupt: more resilient, but less efficient. In my case at 50 kHz, it cost too much time relative to the average amount of processing per interrupt.
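For reference, the usual "deferred interrupt" pattern uses a direct-to-task notification. A minimal sketch, where TIMER_IRQHandler and toggle_pin() are placeholders for the device-specific parts, and the task is assumed to have been created at high priority with its handle stored in xToggleTask:

    #include "FreeRTOS.h"
    #include "task.h"

    extern void toggle_pin(void);  /* placeholder GPIO toggle */

    static TaskHandle_t xToggleTask = NULL;  /* set at task creation */

    /* The 50 kHz ISR: must sit at or below
     * configMAX_SYSCALL_INTERRUPT_PRIORITY because it calls a
     * FromISR API function. */
    extern "C" void TIMER_IRQHandler(void)
    {
        BaseType_t xWoken = pdFALSE;
        /* ...clear the timer's interrupt flag here (device specific)... */
        vTaskNotifyGiveFromISR(xToggleTask, &xWoken);
        portYIELD_FROM_ISR(xWoken);
    }

    static void vToggleTask(void *pvParameters)
    {
        (void)pvParameters;
        for (;;) {
            ulTaskNotifyTake(pdTRUE, portMAX_DELAY); /* block until the ISR fires */
            toggle_pin();
        }
    }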
Should I create a timed toggling task at 50 kHz and synchronize it periodically with the external clock signal?
Not feasible unless you raise the system tick frequency dramatically (the tick would have to run at 50 kHz or more), which causes a severe loss of performance.

You cannot get 50 kHz using FreeRTOS mechanisms.
Of course you can try, but it is a bad idea. You would have to set the system tick period to 10 µs at most (half the 20 µs period of a 50 kHz signal); your task would then have low latency (it could react "immediately"), but the context-switch ISR would be called very often, and this reduces performance.
The right way is to use a hardware timer to generate the interrupt (to toggle the GPIO), or to use a timer with a PWM output; in that case the frequency accuracy depends on the clock source.
To synchronize the timer with an external source, use an external interrupt together with an additional high-resolution timer (10 µs or better).
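A minimal sketch of the hardware-timer approach, with STM32-style register names as an assumption (adapt TIM2, GPIOA and pin 5 to your board); the kernel is not involved at all:

    #include "stm32f4xx.h"

    extern "C" void TIM2_IRQHandler(void)
    {
        if (TIM2->SR & TIM_SR_UIF) {   /* update (period elapsed) event? */
            TIM2->SR = ~TIM_SR_UIF;    /* clear the flag */
            GPIOA->ODR ^= (1u << 5);   /* toggle PA5 */
        }
    }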

Related

How long is a "tick" in FreeRTOS?

For the functions xTaskGetTickCount() and xTaskGetTickCountFromISR(), the FreeRTOS documentation doesn't give any indication of what a "tick" is, or how long it is, or any links to where to find out.
Returns:
The count of ticks since vTaskStartScheduler was called.
What is a "tick" in FreeRTOS? How long is it?
I first found the answer in an archived thread on the FreeRTOS forums:
The tick frequency is set by configTICK_RATE_HZ in FreeRTOSConfig.h. FreeRTOSConfig.h settings are described here:
http://www.freertos.org/a00110.html
If you set configTICK_RATE_HZ to 1000 (1 kHz), then a tick is 1 ms (one one-thousandth of a second). If you set configTICK_RATE_HZ to 100 (100 Hz), then a tick is 10 ms (one one-hundredth of a second). Etc.
And from the linked FreeRTOS doc:
configTICK_RATE_HZ
The frequency of the RTOS tick interrupt.
The tick interrupt is used to measure time. Therefore a higher tick frequency means time can be measured to a higher resolution. However, a high tick frequency also means that the RTOS kernel will use more CPU time so be less efficient. The RTOS demo applications all use a tick rate of 1000Hz. This is used to test the RTOS kernel and is higher than would normally be required.
More than one task can share the same priority. The RTOS scheduler will share processor time between tasks of the same priority by switching between the tasks during each RTOS tick. A high tick rate frequency will therefore also have the effect of reducing the 'time slice' given to each task.
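In practice you rarely hard-code the conversion; a small sketch using the standard FreeRTOS macros:

    #include <stdint.h>
    #include "FreeRTOS.h"
    #include "task.h"

    void tick_example(void)
    {
        TickType_t ticks = xTaskGetTickCount();           /* ticks since the scheduler started */
        uint32_t elapsed_ms = ticks * portTICK_PERIOD_MS; /* ticks -> milliseconds */
        (void)elapsed_ms;

        vTaskDelay(pdMS_TO_TICKS(500));                   /* milliseconds -> ticks */
    }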

How does a gnuradio source block know how many samples to output?

I'm trying to understand how gnuradio source blocks work. I know how to make a simple one that outputs a constant and I understand what sample rate means, but I'm not sure how (or where) to combine the two.
Is the source block in charge of regulating the amount of data to output? Or does the amount that it outputs depend upon other blocks in the flow graph and how much they consume? Some source blocks take sample_rate as an input, which makes me think it's the former. But other blocks don't, which makes me think it's the latter.
If a source block is in charge of its sample rate, how does it regulate it? Do they check the system clock and output samples based upon that?
Do they check the system clock and output samples based upon that?
Definitely not. All GNU Radio blocks operate at the maximum speed the processor can give.
However, GNU Radio relies on the fact that each flowgraph may have a source and/or sink device (e.g. a USRP or other SDR device, or a sound card) that produces/consumes samples at a constant rate. Consequently, the flowgraph is throttled at the rate of the hardware.
To avoid CPU saturation when none of these hardware devices is present, GNU Radio provides the Throttle block, which tries (not very accurately) to limit the throughput to the given number of samples per second by sleeping for a suitable amount of time between the samples that pass through it.
As far as the sample_rate parameter is concerned, excluding Throttle and device-specific blocks, it is used only for graphical representation or internal calculations.
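To make that concrete, here is a sketch of a minimal constant source against the GNU Radio C++ API (class and member names are mine). Note that work() never consults a clock: the scheduler decides noutput_items, and the block simply fills the buffer it is handed.

    #include <gnuradio/sync_block.h>
    #include <gnuradio/io_signature.h>

    class const_source : public gr::sync_block
    {
    public:
        explicit const_source(float value)
            : gr::sync_block("const_source",
                             gr::io_signature::make(0, 0, 0),              /* no inputs  */
                             gr::io_signature::make(1, 1, sizeof(float))), /* one output */
              d_value(value)
        {
        }

        int work(int noutput_items,
                 gr_vector_const_void_star& input_items,
                 gr_vector_void_star& output_items) override
        {
            (void)input_items;
            float* out = static_cast<float*>(output_items[0]);
            for (int i = 0; i < noutput_items; i++)
                out[i] = d_value;  /* no pacing, no clock */
            return noutput_items;  /* report what was produced */
        }

    private:
        float d_value;
    };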

Microcontroller communication tasks in background

I'm using an ARM Cortex-M4, and I want to ask if it's possible to offload the main routine from communication tasks and let them run in the background.
For example, I'm using these peripherals on the ARM MCU:
ADC
I2C
UART
SPI
When adc_start(ADC); is called, the ADC starts a conversion in the background, so I don't need to wait until the ADC has finished the conversion; I can go on to the next instruction and read the ADC result later.
I want to ask if it's possible to do the same with the communication peripherals. I2C and SPI can be fast, but since these MCUs can run at 50 MHz and more, it's a waste of MCU speed to wait until the I2C has finished transmitting at 400 kHz, or the SPI at 20 MHz, or worse still the UART. Also, if I'm performing some task that I don't want interrupted, I need to be able to offload the MCU from peripheral interrupts and let the peripherals receive packets and buffer them until I need to read them.
Is something like this possible?
If I've understood the question correctly, you're looking for automatic, interrupt-based handling of fast communication peripherals such as I2C and SPI. As far as I know, yes, it's achievable, at least on the Texas Instruments TIVA ARM Cortex-M4 series MCUs. It's quite a nifty feature to have around when you're working on computationally intensive algorithms and don't want the CPU bogged down waiting for the SPI to finish its task.
For a good reference on programming the CORTEX M4 peripherals, I recommend keeping this book handy:
http://www.amazon.com/TI-ARM-Peripherals-Programming-Interfacing-ebook/dp/B00L9DRAI2
Table 6-7 in chapter 6 of the book details the interrupt vector table on the TM4C123G MCU (the one shipped with the TIVA LaunchPad). Interrupts 50 and 53 are the assignments for the SSI/SPI and I2C peripherals respectively. The process should be fairly straightforward once you unmask the right interrupts.
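As a sketch of the general pattern (the UART0_* names and the ISR hookup are placeholders; real register access is device specific): the ISR drains the peripheral into a ring buffer, and the main loop picks the bytes up whenever it has time.

    #include <stdint.h>

    /* Device-specific placeholders: */
    extern "C" int     UART0_rx_ready(void);  /* RX FIFO not empty?     */
    extern "C" uint8_t UART0_read_byte(void); /* pop one byte from FIFO */

    #define RX_BUF_SIZE 256u  /* power of two: indices wrap cheaply */

    static volatile uint8_t  rx_buf[RX_BUF_SIZE];
    static volatile uint32_t rx_head, rx_tail;

    extern "C" void UART0_IRQHandler(void)
    {
        while (UART0_rx_ready()) {
            rx_buf[rx_head & (RX_BUF_SIZE - 1u)] = UART0_read_byte();
            rx_head++;  /* single writer: the ISR */
        }
    }

    /* Called from the main loop; returns -1 when the buffer is empty. */
    int uart_getc(void)
    {
        if (rx_tail == rx_head)
            return -1;
        int c = rx_buf[rx_tail & (RX_BUF_SIZE - 1u)];
        rx_tail++;      /* single reader: the main loop */
        return c;
    }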

What are asynchronous circuits?

There are combinational and sequential circuits. In sequential circuits a memory element is used. Do asynchronous circuits also use flip-flop-like memory elements? And how are they unstable, which makes them a poor choice for a circuit? How can the instability in an asynchronous circuit be explained?
You are probably referring to a clock-driven (synchronous) system, where data is latched at a certain point in the clock period (normally the rising or falling edge). An asynchronous circuit can change its state at any time, including during the latching window; violating the latch's setup/hold requirements in this way is what leads to instability (metastability).

NSThread, NSOperation or GCD for CoreMotion and accurate timing purposes?

I'm looking to do some high-precision Core Motion reading (>= 100 Hz if possible) and motion analysis on the iPhone 4 and newer, which will run continuously for the duration of the main part of the app. It's imperative that the motion response and the signals that the analysis code sends out are as free from lag as possible.
My original plan was to launch a dedicated NSThread based on the code in the metronome project as referenced here: Accurate timing in iOS, along with a protocol for motion analysers to link in and use the thread. I'm wondering whether GCD or NSOperation queues might be better?
My impression after copious reading is that they are designed to handle a quantity of discrete, one-off operations rather than a small number of operations performed over and over again on a regular interval and that using them every millisecond or so might inadvertently create a lot of thread creation/destruction overhead. Does anyone have any experience here?
I'm also wondering about the performance implications of an endless while loop in a thread (such as in the code in the above link). Does anyone know more about how things work under the hood with threads? I know that the iPhone 4 (and earlier) has a single-core processor and uses some sort of intelligent (pre-emptive?) multitasking, which switches threads based on various timing and I/O demands to create the effect of parallelism...
If you have a thread that has a simple "while" loop running endlessly but only doing any additional work every millisecond or so, does the processor's switching algorithm consider the endless loop a "high demand" on resources thus hogging them from other threads or will it be smart enough to allocate resources more heavily towards other threads in the "downtime" between additional code execution?
Thanks in advance for the help and expertise...
IMO the bottleneck is rather the sensors. The actual update frequency is most often not equal to what you have specified. See "update frequency set for deviceMotionUpdateInterval it's the actual frequency?" and "Actual frequency of device motion updates lower than expected, but scales up with setting".
Some time ago I made a couple of measurements using Core Motion and the raw sensor data as well. I needed a high update rate too, because I was doing Simpson integration and thus wanted to minimise errors. It turned out that the real frequency is always lower and that there is a limit at about 80 Hz. It was an iPhone 4 running iOS 4. But as long as you don't need this for scientific purposes, in most cases 60-70 Hz should fit your needs anyway.
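On the GCD concern specifically (thread churn from repeated dispatches): a repeating dispatch timer source fires on an existing queue without creating or destroying a thread per tick. A minimal sketch using the plain C libdispatch API, with the handler body as a placeholder (on iOS you would let the main run loop run rather than call dispatch_main):

    #include <dispatch/dispatch.h>
    #include <cstdio>

    static void on_tick(void*)  /* runs every 10 ms on the queue below */
    {
        std::puts("sample sensors / run analysis here");
    }

    int main()
    {
        dispatch_queue_t q = dispatch_queue_create("motion.queue", DISPATCH_QUEUE_SERIAL);
        dispatch_source_t timer =
            dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, q);

        dispatch_source_set_timer(timer,
                                  dispatch_time(DISPATCH_TIME_NOW, 0),
                                  10 * NSEC_PER_MSEC,  /* 100 Hz interval */
                                  1 * NSEC_PER_MSEC);  /* allowed leeway  */
        dispatch_source_set_event_handler_f(timer, on_tick);
        dispatch_resume(timer);

        dispatch_main();  /* park the main thread */
    }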
