I use FreeRTOS on an EFM32GG380F1024. The Cortex-M SysTick is used for the RTOS tick; the low-energy RTC (BURTC) is used during sleep to generate timed wake-up calls. The energy mode is EM3 (only the ultra-low-frequency clock still operating).
As soon as FreeRTOS calls me through the "suppressTicksAndSleep" callback, I do the following:
Enter a critical section (globally disable IRQs) with a call to __disable_irq()
Disable the SysTick interrupt (at least I try to; it does not work currently) by writing the register: SysTick->CTRL = SysTick_CTRL_CLKSOURCE_Msk | SysTick_CTRL_ENABLE_Msk;
Set up and start the low-energy RTC (BURTC)
Enter EM3
The problem is that just after entering the energy mode, the SysTick interrupt kicks in and wakes the device.
This should not be possible, because Energy Mode 3 disables the HF and LF clocks, so the SysTick counter should not even increment.
Can someone help out? Why is this not suspending the SysTick correctly?
Have a look at the screenshot from my Tracealyzer:
https://imgur.com/a/8PQ9SSb
SysTick->CTRL = SysTick_CTRL_CLKSOURCE_Msk | SysTick_CTRL_ENABLE_Msk
You are not clearing any bits in CTRL. That line should probably be something like
SysTick->CTRL &= ~(SysTick_CTRL_CLKSOURCE_Msk | SysTick_CTRL_ENABLE_Msk)
to clear the CLKSOURCE and ENABLE bits.
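For illustration, a minimal sketch of that suspend/restore pattern (assuming the Gecko SDK's em_device.h / CMSIS definitions; the helper names are made up, not FreeRTOS or SDK functions):

#include <stdint.h>
#include "em_device.h"   /* device header; pulls in the CMSIS SysTick definitions */

/* Suspend SysTick (counter and interrupt) around the low-power period and
   restore it afterwards. Returns the previous CTRL value. */
static uint32_t systick_suspend(void)
{
    uint32_t ctrl = SysTick->CTRL;                 /* remember current settings */
    SysTick->CTRL &= ~(SysTick_CTRL_ENABLE_Msk |   /* stop the counter          */
                       SysTick_CTRL_TICKINT_Msk);  /* and mask its interrupt    */
    return ctrl;
}

static void systick_resume(uint32_t ctrl)
{
    SysTick->CTRL = ctrl;                          /* restore SysTick on wake-up */
}

Clearing ENABLE stops the counter itself; clearing TICKINT as well keeps the SysTick exception from being raised while the tick is suppressed.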
I am working on a hypervisor for the Raspberry Pi 4B board.
While studying interrupt virtualization I ran into a problem. I am following this document:
https://developer.arm.com/documentation/102142/0100/Virtualizating-exceptions
which says:
"There are two mechanisms for generating virtual interrupts:
1. Internally by the core, using controls in HCR_EL2.
2. Using a GICv2, or later, interrupt controller."
I use method 1. Everything worked fine: I can route IRQs to my EL2 code and forward them to the EL1 Linux kernel.
But when I tested disabling IRQs from EL1 with "msr daifset, #0xf", the IRQs no longer reached EL2 either.
I am confused, because the document above says PSTATE.I should only affect the vIRQ (for EL1), not the pIRQ (for EL2). I tested a GPIO interrupt and an IPI; both failed.
I searched the web, but there are few articles on this topic and I can't find any additional settings.
Every document I found says that setting the I bit at EL1 does not affect EL2/EL3.
Thanks, if anyone can help.
There are two concepts involved in interrupt handling: interrupt routing and interrupt masking. Setting the PSTATE.DAIF bits to 1 masks the interrupts (with some conditions), while HCR_EL2.{AMO,IMO,FMO} route the interrupts to EL2 from EL1 or EL0.
If you are at EL1/EL0 and HCR_EL2.{IMO,FMO,AMO} are set to one, interrupts cannot be masked using the PSTATE.DAIF bits; in this case the interrupt is delivered to EL2. This assumes HCR_EL2.{E2H,TGE} are zero.
If you are at EL1/EL0 and HCR_EL2.{IMO,FMO,AMO} are set to zero, interrupts can be masked using the PSTATE.DAIF bits.
If you are at EL2 and HCR_EL2.{IMO,FMO,AMO} are set to zero, the interrupt is not delivered, irrespective of the PSTATE.DAIF bits. If they are set to one, the PSTATE.DAIF bits can mask the interrupt.
Similarly, there are controls in SCR_EL3.{EA,IRQ,FIQ} which, when set to one, route the interrupt to EL3 from EL2, EL1 or EL0. The rules above apply there as well.
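As a rough sketch of the first case (bare-metal EL2 code with GCC-style inline assembly; the helper name is made up for illustration):

#include <stdint.h>

/* HCR_EL2 routing bits (Armv8-A): FMO = bit 3, IMO = bit 4, AMO = bit 5. */
#define HCR_EL2_FMO (1UL << 3)   /* route physical FIQ to EL2    */
#define HCR_EL2_IMO (1UL << 4)   /* route physical IRQ to EL2    */
#define HCR_EL2_AMO (1UL << 5)   /* route physical SError to EL2 */

/* Run this at EL2: with these bits set (and E2H/TGE clear), EL1's
   PSTATE.DAIF no longer masks the physical interrupts; they are taken
   at EL2, while DAIF still masks the virtual ones injected to EL1. */
static inline void route_physical_interrupts_to_el2(void)
{
    uint64_t hcr;

    __asm__ volatile("mrs %0, hcr_el2" : "=r"(hcr));
    hcr |= HCR_EL2_AMO | HCR_EL2_IMO | HCR_EL2_FMO;
    __asm__ volatile("msr hcr_el2, %0" : : "r"(hcr));
    __asm__ volatile("isb");
}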
It seems that when an I/O pin interrupt occurs while network I/O is being performed, the system resets -- even if the interrupt function only declares a local variable and assigns it (essentially a do-nothing routine.) So I'm fairly certain it isn't to do with spending too much time in the interrupt function. (My actual working interrupt functions are pretty spartan, strictly increment and assign, not even any conditional logic.)
Is this a known constraint? My workaround is to disconnect the interrupt while using the network, but of course this introduces potential for data loss.
-- 'pin' and 'lastTrig' are globals defined elsewhere in the script
function fnCbUp(level)
  lastTrig = rtctime.get()
  gpio.trig(pin, "down", fnCbDown)
end

function fnCbDown(level)
  local spin = rtcmem.read32(20)   -- counter kept in RTC user memory, slot 20
  spin = spin + 1
  rtcmem.write32(20, spin)
  lastTrig = rtctime.get()
  gpio.trig(pin, "up", fnCbUp)
end

gpio.trig(pin, "down", fnCbDown)
gpio.mode(pin, gpio.INT, gpio.FLOAT)
branch: master
build built on: 2016-03-15 10:39
powered by Lua 5.1.4 on SDK 1.4.0
modules: adc,bit,file,gpio,i2c,net,node,pwm,rtcfifo,rtcmem,rtctime,sntp,tmr,uart,wifi
Not sure if this should be an answer or a comment. May be a bit long for a comment though.
So, the question is "Is this a known constraint?" and the short but unsatisfactory answer is "no". Can't leave it like that...
Is the code excerpt enough for you to conclude the reset must occur due to something within those few lines? I doubt it.
What you seem to be doing is a simple "global" increment on each GPIO 'down', with some debounce logic. However, I don't see any debouncing; what am I missing? You get the time into the global lastTrig but you don't do anything with it. Just for debouncing you won't need rtctime, IMO, but I doubt it has anything to do with the problem.
I have a gist of a tmr.delay-based debounce as well as one with tmr.now that is more like a throttle. You could use the first like so:
GPIO14 = 5
spin = 0                           -- global pulse counter

function down()
  spin = spin + 1
  tmr.delay(50)                    -- time delay for switch debounce
  gpio.trig(GPIO14, "up", up)      -- re-arm for the rising edge
end

function up()
  tmr.delay(50)
  gpio.trig(GPIO14, "down", down)  -- re-arm for the falling edge
end

gpio.mode(GPIO14, gpio.INT)        -- gpio.FLOAT by default
gpio.trig(GPIO14, "down", down)
I also suggest running this against the dev branch, because you said it may be related to network I/O during interrupts.
I have nearly the same problem.
Running the ESP8266WebServer and using a GPIO14 interrupt, with pulses coming in too fast as input,
the system stops recording the interrupts.
Please see here for more details:
http://www.esp8266.com/viewtopic.php?f=28&t=9702
I'm using the Arduino IDE 1.6.9, but the problem seems to be the same.
I used an ESP8266-07 as generator & counter (without the web server)
to generate the pulses, wired to my ESP8266-Watersystem.
The generator works very well, at well over 240 pulses/sec,
generating and counting on the same ESP.
But the ESP-Watersystem stops recording interrupts at more than 50 pulses per second:
/*************************************************/
/* ISR Water pulse counter */
/*************************************************/
/**
* Invoked by interrupt14 once per rotation of the hall-effect sensor. Interrupt
* handlers should be kept as small as possible so they return quickly.
*/
void ICACHE_RAM_ATTR pulseCounter()
{
  // Increment the pulse counter
  cli();
  G_pulseCount++;
  Serial.println("!");
  sei();
}
The serial output is here only to show what is happening.
It shows the correctly counted pulses until the web server interacts with the network.
Then it seems the interrupt is blocked (no serial output from that point on).
When I stress the system by refreshing the website several times in a short period,
the interrupt counting resumes for a short time, but then stops again shortly after.
The problem lies somewhere between the interrupt handling and the web services.
I hope this helps to track down the issue.
I'm interested in any solutions.
Who can help?
Thanks from Mickbaer
Berlin, Germany
Email: michael.lorenz#web.de
I need to calculate the power consumption of the CPU, according to this formula:
Power (mW) = cpu * 1.8 / time
where time is the sum of cpu + lpm.
I need to measure at the start and at the end of a certain process; however, the elapsed time is too short and the CPU doesn't switch to LPM mode, as seen in the following values taken with powertrace_print():
all_cpu    all_lpm    all_transmit   all_listen
116443     1514881    148            1531616
17268      1514881    148            1532440
Calculating the power consumption of the CPU, I got 1.8 mW (which is exactly the current-draw value of the CPU in active mode).
My question is: how do I calculate the power consumption in this case?
If the MCU does not go into LPM, then it spends all its time in active mode, so the 1.8 mW result you get looks correct.
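For example, between two powertrace samples where only the cpu counter advances (so the lpm delta is 0), the formula reduces to delta_cpu * 1.8 / (delta_cpu + 0) = 1.8 mW, i.e. exactly the active-mode figure, no matter how short the interval is.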
Perhaps you want to ask something different? If you want to measure the time required to execute a specific block of code, you can add RTIMER_NOW() calls at the start and end of the block.
The time resolution of RTIMER_NOW() may be too coarse for short operations. You can use a higher-frequency timer for that, depending on your platform, e.g. read the TBR register if you're compiling for an MSP430-based sensor node.
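Something along these lines, as a sketch (do_work() is only a placeholder for the block under test):

#include <stdio.h>
#include "sys/rtimer.h"   /* RTIMER_NOW(), RTIMER_SECOND, rtimer_clock_t */

void do_work(void);        /* placeholder: the code you want to time */

/* Bracket the block with RTIMER_NOW() and convert ticks using
   RTIMER_SECOND (platform dependent, often 32768 ticks per second). */
void measure_block(void)
{
  rtimer_clock_t start, end;

  start = RTIMER_NOW();
  do_work();
  end = RTIMER_NOW();

  printf("elapsed: %u ticks out of %u per second\n",
         (unsigned)(end - start), (unsigned)RTIMER_SECOND);
}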
I use an ESP8266 dev board from NodeMCU with Lua. I power my chip with two AA batteries, which gives me 3V. See this:
https://www.hackster.io/noelportugal/ifttt-smart-button-e11841
How do I check the battery status using NodeMCU?
With a recent firmware you can use adc.readvdd33(). That should be enough for your case.
I read somewhere that adc.readvdd33() was deprecated? Effectively it is for many of the ESP8266 modules available; the docs say, "If the ESP8266 has been configured to use the ADC for sampling the external pin, this function will always return 65535". So that means any ESP8266 that has an ADC pin (like the ESP8266-07 or -12, etc.) has this shunted in firmware.
But by adding a couple of resistors to make a voltage divider, you can still use the ADC pin for this.
Schematic: http://i.stack.imgur.com/FEILF.png
Those resistor values will allow it to read 0-12V, as a value between 0-1024. (The voltage at the ADC pin must be less than 1V.)
val = adc.read(0)
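For example, with that divider a raw reading of 256 works out to roughly 256 * 12 / 1024 = 3.0 V, which is about what two fresh AA cells deliver.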
Addendum: adding this to your circuit incurs a current draw of approx. 0.01 mA, small but more than nothing. Multiply the resistor values by 1000 to make it negligible, or use 18 megaohm for R1 and 2 megaohm for R2, which divides the voltage by 10 and (wild guess) drains less current than most if not all batteries lose to self-discharge when disconnected.
I am trying to drive down the current consumption of Contiki OS running on the CC2538 development kit.
I would like to operate the device from a CR2032 with a run life of 2 years. To achieve this I would need an average current of less than 100 uA.
However, when I run the following examples at 3 V, I get these results:
contiki/examples/hello-world = 0.4mA - 2mA
contiki/examples/er-rest-example/er-example-client = 27mA
contiki/examples/er-rest-example/er-example-server = 27mA
thingsquare websocket example = 4mA
I have also designed my own target platform based on the CC2538 and get similar results.
I have read the guide at https://github.com/contiki-os/contiki/blob/648d3576a081b84edd33da05a3a973e209835723/platform/cc2538dk/README.md
and have ensured that the contiki-conf.h file sets (see the snippet after this list):
- LPM_CONF_ENABLE 1
- LPM_CONF_MAX_PM 2
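Spelled out as preprocessor defines (a sketch, using the macro names from the README above):

/* contiki-conf.h excerpt: enable the LPM module and allow it
   to drop down to PM2, per the cc2538dk README linked above. */
#define LPM_CONF_ENABLE  1
#define LPM_CONF_MAX_PM  2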
Can anyone give me some pointers on how to get the current down? It would be most appreciated.
Regards,
Shane
How did you measure the current?
You have to be aware that using a basic ampere meter to measure the current consumption of Contiki OS won't give you relevant results. The system turns the radio on and off at a relatively high rate (8 Hz by default) in order to perform CCA checks. This may not be easy to catch with an ampere meter.
To get an idea of the current consumption when the device is in deep sleep (and then do the maths to determine the average current consumption), I'd rather put the device into the PM state before the program reaches the infinite while loop. I used the following code to do that:
lpm_enter();
REG(SYS_CTRL_PMCTL) = SYS_CTRL_PMCTL_PM2;
do { asm("wfi"::); } while(0);
leds_on(LEDS_RED); // should not reach here
while(1){
...
On the CC2538, a CCA check consumes about 10-15 mA and lasts approximately 2 ms. When the radio transmits a packet, it consumes 25 mA. Have a look at this post: Contiki UDP packet transmission duration with CC2538.
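As a rough back-of-the-envelope check against your 100 uA budget: 8 CCA checks per second, each lasting about 2 ms at 10-15 mA, already average out to roughly 8 * 0.002 s * 12.5 mA ≈ 0.2 mA, so with the default duty cycling the radio alone is above the target before any packet is transmitted.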
Furthermore, to save a little more current, turn off serial communication:
#define CC2538_CONF_QUIET 1
Are you using the SmartRF board? If you want to make proper current measurements with this board, you have to remove all the jumpers: P486, P487, P411 and P408. Keep only the jumpers for the BTN_SEL and RESET signals.