I'm using an ARM Cortex-M4 and I want to ask if it's possible to offload communication tasks from the main routine and let them run in the background.
For example, I'm using these peripherals on the ARM MCU:
ADC
I2C
UART
SPI
When adc_start(ADC); is called, the ADC starts its conversion in the background, so I don't need to wait until the conversion has finished; I can go on to the next instruction and read the ADC result later.
I want to ask if it's possible to do the same with the communication peripherals. I2C and SPI can be fast, but since these MCUs can run at 50 MHz and more, it's a waste of CPU cycles to wait until the I2C has finished transmitting at 400 kHz, or the SPI at 20 MHz, or worse still, the UART. Also, if I'm performing some task that I don't want interrupted, I need to be able to shield the CPU from peripheral interrupts and let the peripherals receive packets and buffer them, so I can read them when I need to.
Is something like this possible?
If I've understood the question correctly, you're looking for automatic interrupt-based handling of fast communication peripherals such as the I2C and SPI. As far as I know, yes, it's achievable, at least on the Texas Instruments TIVA-based ARM Cortex-M4 series MCUs. It's quite a nifty little feature to have around when you're working on computationally intensive algorithms and don't want the CPU bogged down waiting for the SPI to finish its task.
For a good reference on programming the Cortex-M4 peripherals, I recommend keeping this book handy:
http://www.amazon.com/TI-ARM-Peripherals-Programming-Interfacing-ebook/dp/B00L9DRAI2
Table 6-7 in chapter 6 of the book details the interrupt vector table on the TM4C123G MCU (the one shipped with the TIVA launchpad). Interrupts 50 and 53 are the assignments for the SSI/SPI and I2C peripherals respectively. The process should be fairly straightforward once you unmask the right interrupts.
Related
While I'm not new to embedded programming, I'm new to the Atmel SAM3X microcontroller. I'm trying to figure out if it's possible to use DMA to read a value from a memory-mapped register (a GPIO port, in this case) into a buffer periodically at say 1/4 the clock rate (faster than can be accomplished by software copying or software triggering of DMA), then turn the buffer over to the USB DMA to send it out the USB cable.
I see that PWM is one of the peripherals that can perform DMAC "transmissions", and I also see that the DMA channel registers have separate fields for the source address and the source peripheral identifier. Are the address and peripheral identifier independent, and can they work together? Could you use the PWM as the source peripheral, acting as a clock divider, but copy from the port's data address? If so, how might this be accomplished in terms of register writes (I ask in order to avoid trial and error)? If not, is there any other way of sampling a memory location at a regular, high, but sub-clock rate?
In a PCIe configuration, devices have dedicated addresses and send data to each other in peer-to-peer mode - every device can write whenever it wants, and the switches take care of passing the data forward correctly. There is no need for a "bus master" that decides when and how data will be transmitted.
How does DMA come into play in such a configuration? To me it seems that DMA is an outdated feature that is not needed in a PCIe configuration. Every device can send data to the main memory, or read from it - obviously the main memory will always be the "slave" in such operations.
Or is there some other functionality of DMA, which I am missing?
Thank you in advance!
When a device other than a CPU accesses memory that is attached to a CPU, this is called direct memory access (DMA). So any PCIe read or write requests issued from PCIe devices constitute DMA operations. This can be extended with 'device to device' or 'peer to peer' DMA where devices perform reads and writes against each other without involving the CPU or system memory.
There are two main advantages of DMA: First, DMA operations can move data into and out of memory with minimal CPU load, improving software efficiency. Second, the CPU can only issue reads and writes of whatever the CPU word size is, which results in very poor throughput over the PCIe bus due to TLP headers and other protocol overheads. Devices directly issuing read and write requests can issue read and write operations with much larger payloads, resulting in higher throughput and more efficient use of the bus bandwidth.
So, DMA is absolutely not obsolete or outdated - basically all high-performance devices connected over PCIe will use DMA to use the bus efficiently.
I want to blink an LED (toggle a GPIO pin) synchronously with a hardware timer configured to raise an interrupt at 50 kHz on an ARM Cortex-M4.
In my current code, I toggle one GPIO pin in a specific ISR triggered by a 50 kHz external clock signal. The result is that the GPIO pin toggles very erratically, at random frequencies from 1 kHz to 96 kHz.
The operating system is not running any other task apart from the timer tick interrupt (100 Hz, with the lowest priority), the idle task, and my specific ISR handler.
However, this "toggling solution" works perfectly in a bare-metal implementation on the same MCU. So my problem seems to come from my lack of knowledge of the FreeRTOS environment.
Is it feasible to toggle a LED in a FreeRTOS ISR at 50 kHz?
Do I need to do it in a task waiting for the 50 kHz interrupt signal?
Should I create a timed toggling task at 50 kHz and synchronize it periodically with the external clock signal?
If you want the 50 kHz to be accurate (no jitter), the priority of the interrupt would have to be at or above configMAX_SYSCALL_INTERRUPT_PRIORITY: https://www.freertos.org/a00110.html#kernel_priority and https://www.freertos.org/RTOS-Cortex-M3-M4.html (assuming you are using a port that supports interrupt nesting).
I finally found out that the instruction cache was not enabled in the FreeRTOS version of my 50 kHz ISR, unlike in the bare-metal version. It's now almost working as expected (some jitter remains).
In reply to my own questions, I suggest the following:
Is it feasible to toggle a LED in a FreeRTOS ISR at 50 kHz?
Definitely yes. FreeRTOS can run ISRs at any frequency. Feasibility only depends on the computational capabilities of the MCU, together with instruction and data access performance. FreeRTOS will certainly add some delay compared to a bare-metal implementation when processing an ISR (ISR overhead and port efficiency), but again, how significant this is depends on the MCU's performance and the frequency in question.
Do I need to do it in a task waiting for the 50 kHz interrupt signal?
Not necessarily. Still, this is a good alternative if irregular or fairly heavy processing is needed. However, the FreeRTOS deferral primitives cost time, and that may be too much to finish the processing before the next interrupt: more resilient, but less efficient. In my case, at 50 kHz, it cost too much time relative to the average amount of processing per ISR.
Should I create a timed toggling task at 50 kHz and synchronize it periodically with the external clock signal?
Not feasible unless you raise the system tick frequency significantly (i.e., shorten the tick period), which will cause a severe loss of performance.
You cannot get 50 kHz from the FreeRTOS timing mechanisms.
Of course you can try, but it's a bad idea. You would have to set the system tick period to 10 µs or less (half of the 20 µs period of 50 kHz); your task would then have low latency (it could react "immediately"), but the context-switch ISR would run very often, which degrades performance.
The right way is to use a hardware timer to generate the interrupt (to toggle the GPIO), or to use a timer with a PWM output, in which case the frequency accuracy depends on the clock source.
To synchronize the timer with an external source, you should use an external interrupt plus an additional high-resolution timer (at least 10 µs resolution).
I am learning about the MicroBlaze processor, and I don't really understand this behaviour while using the GPIO functions.
This simply means that you have a second, independent GPIO channel on the same peripheral.
It's like having 2 different GPIO peripherals, but without the burden of allocating another one (with the associated duplication of bus-attachment logic, etc.).
The Xilinx GPIO peripherals have always been like this, going back from the OPB bus ones to the PLB bus ones, and now to the newest AXI bus peripherals.
You could have answered this yourself by reading the peripheral documentation. (Hint: in the "Register Space" chapter, page 10, you'll see a second set of registers named "GPIO2_*", which are available only when "dual channel" is enabled.)
Is it possible to implement an IR receiver on Android Things?
1st idea:
Use a GPIO as input, try to buffer the changes, and then parse the buffer to decode the message.
findings:
The GPIO listener mechanism is too slow to observe an IR signal.
Another way is to read the GPIO in an infinite loop, but all IR protocols depend strongly on timing, and Java (Dalvik) is not accurate enough for this.
2nd idea:
Use UART
findings:
It seems possible to adjust the baud rate to observe all the bits of a message, but the UART API requires configuring the number of start bits etc., and this is a problem because IR protocols do not fit that scheme.
IMHO, at the moment, UART is the only path, but it would be a huge workaround.
The overarching problem (as you've discovered) is that any non-realtime system will have difficulty parsing this input directly because of the timing constraints. This is a job best suited to a microcontroller where you can access a timer interrupt. Grab an inexpensive tinyAVR or PIC to manage the sensor for you.
You will also want to use a dedicated receiver sensor (you might already be doing this) to simplify parsing the signal. These sensors include a demodulator, which means you don't have to deal with the 38 kHz carrier; the input is converted into a more standard PWM wave.
I believe you can't process the IR signal in Java because the pulses would be shorter than the read resolution, at least on a Raspberry Pi. To get faster GPIO reads, I'm fairly confident you can do it in C++ with the NDK on the Raspberry Pi. Though it's not officially supported, there are some tricks to enable it. See How to do GPIO on Android Things bypassing Java on how to write to GPIO in C. From there it should be trivial to read in a tight loop. Though I would still try to hook the trigger from Java, since so far I have no clear, easy idea of how to write/install interrupts in C.