It seems that when an I/O pin interrupt occurs while network I/O is being performed, the system resets -- even if the interrupt function only declares a local variable and assigns it (essentially a do-nothing routine). So I'm fairly certain it isn't down to spending too much time in the interrupt function. (My actual working interrupt functions are pretty spartan: strictly increment and assign, not even any conditional logic.)
Is this a known constraint? My workaround is to disconnect the interrupt while using the network, but of course this introduces potential for data loss.
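For reference, the detach/re-attach workaround looks roughly like this (a minimal sketch; the host, port and payload are illustrative, pin and fnCbDown are as in the code below, and any edges arriving while the pin is in plain input mode are simply lost):
local function sendWithInterruptsOff(payload)
    gpio.mode(pin, gpio.INPUT)                -- stop interrupts; pulses are lost here
    local sk = net.createConnection(net.TCP, 0)
    sk:on("connection", function(conn) conn:send(payload) end)
    sk:on("sent", function(conn)
        conn:close()
        gpio.mode(pin, gpio.INT, gpio.FLOAT)  -- re-arm the interrupt
        gpio.trig(pin, "down", fnCbDown)
    end)
    sk:connect(2323, "192.168.1.10")          -- illustrative host/port
end
The interrupt code in question: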
function fnCbUp(level)
    lastTrig = rtctime.get()
    gpio.trig(pin, "down", fnCbDown)
end

function fnCbDown(level)
    local spin = rtcmem.read32(20)
    spin = spin + 1
    rtcmem.write32(20, spin)
    lastTrig = rtctime.get()
    gpio.trig(pin, "up", fnCbUp)
end

gpio.trig(pin, "down", fnCbDown)
gpio.mode(pin, gpio.INT, gpio.FLOAT)
branch: master
build built on: 2016-03-15 10:39
powered by Lua 5.1.4 on SDK 1.4.0
modules: adc,bit,file,gpio,i2c,net,node,pwm,rtcfifo,rtcmem,rtctime,sntp,tmr,uart,wifi
Not sure if this should be an answer or a comment. May be a bit long for a comment though.
So, the question is "Is this a known constraint?" and the short but unsatisfactory answer is "no". Can't leave it like that...
Is the code excerpt enough for you to conclude the reset must occur due to something within those few lines? I doubt it.
What you seem to be doing is a simple "global" increment on each GPIO 'down', with some debounce logic. However, I don't see any debounce; what am I missing? You get the time into the global lastTrig, but you don't do anything with it. Just for debouncing you won't need rtctime IMO, but I doubt it's got anything to do with the problem.
I have a gist of a tmr.delay-based debounce as well as one with tmr.now that is more like a throttle. You could use the first like so:
GPIO14 = 5
spin = 0 -- pulse counter

function down()
    spin = spin + 1
    tmr.delay(50) -- switch debounce delay (tmr.delay works in microseconds)
    gpio.trig(GPIO14, "up", up) -- now wait for the rising edge
end

function up()
    tmr.delay(50)
    gpio.trig(GPIO14, "down", down) -- back to triggering on the falling edge
end

gpio.mode(GPIO14, gpio.INT) -- gpio.FLOAT by default
gpio.trig(GPIO14, "down", down)
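The tmr.now-based one is more of a throttle. A from-memory sketch (not the gist verbatim) might look like this; note that tmr.now() returns microseconds and wraps around roughly every 36 minutes, which this ignores:
GPIO14 = 5
spin = 0
local lastEdge = 0

local function down()
    local now = tmr.now()             -- microseconds since boot
    if now - lastEdge > 50000 then    -- ignore edges within 50 ms of the last one
        spin = spin + 1
        lastEdge = now
    end
end

gpio.mode(GPIO14, gpio.INT)
gpio.trig(GPIO14, "down", down)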
I also suggest running this against the dev branch, because you said it may be related to network I/O during interrupts.
I have nearly the same problem.
Running ESP8266WebServer and using a GPIO14 interrupt with too-fast pulses as input, the system stops recording the interrupts.
Please see here for more details:
http://www.esp8266.com/viewtopic.php?f=28&t=9702
I'm using the Arduino IDE 1.6.9, but the problem seems to be the same.
I used an ESP8266-07 as generator and counter (without the web server) to generate the pulses, wired to my ESP8266 water system.
The generator works very well at well over 240 pulses/sec, generating and counting on the same ESP.
But the ESP water system stops recording interrupts at more than 50 pulses/second:
/*************************************************/
/* ISR Water pulse counter */
/*************************************************/
/**
* Invoked by interrupt14 once per rotation of the hall-effect sensor. Interrupt
* handlers should be kept as small as possible so they return quickly.
*/
void ICACHE_RAM_ATTR pulseCounter()
{
  // Increment the pulse counter
  cli();
  G_pulseCount++;
  Serial.println("!");
  sei();
}
The serial output is here only to show what's happening.
It shows the correct pulse count until the web server interacts with the network.
Then the interrupt seems to be blocked (no serial output from that point on).
When I stress the system by refreshing the website several times in a short period, the interrupt counting resumes for a short time, but then stops again.
The problem lies somewhere in the interaction between interrupt handling and the web services.
I hope this helps to track down the issue.
Interested in getting some solutions.
Who can help?
Thanks from Mickbaer
Berlin, Germany
Email: michael.lorenz#web.de
I am trying to write a macro that combines /use on a trinket then calls the WoW API and outputs some data.
/use 14
/run local d=string.format("%.2f",GetDodgeChance()); print("Dodge Trinket used, dodge at:",d);
This works fine apart from what seems to be a timing problem between using the trinket and then getting the updated dodge chance from the API. When I click once, the trinket activates but the macro prints the dodge chance without the trinket buff. If I click immediately again, it then shows the correct value, including the buff.
Is there some timing issue, such as the first /use command not taking effect until the macro ends? How do I ensure the GetDodgeChance() call includes the trinket buff?
Your game client needs to send the trinket use to the server and get a confirmation back that you received the buff which changes your stats; until that round trip completes, GetDodgeChance() still returns the old value.
However, execution of code can be delayed with C_Timer.After.
Your example would become:
/use 14
/run C_Timer.After( 0.5, function() local d=string.format("%.2f",GetDodgeChance()); print("Dodge Trinket used, dodge at:",d) end)
with a half-second delay, which should be long enough for GetDodgeChance() to return the correct (i.e. buffed) value while still appearing "almost instant" to human perception.
You can of course tweak that value and try shorter delays; 0.1 seconds should usually also be long enough for the value to be correct. But since this involves communicating with the server, a very short delay might work fine 99% of the time and then fail when there is a random minor lag spike (not even something you notice in gameplay), because the info about the buff and the changed dodge chance reaches your client a few milliseconds too late.
When I try to set a timeout in Lua like this:
function wait(seconds)
    local start = os.time()
    repeat until os.time() > start + seconds
end
it is too inconsistent. Is there a more precise way to set a timeout that will consistently wait for the amount of time requested?
Without using external libraries, there are basically only two ways in Lua to get higher accuracy:
use os.clock (see the sketch below)
run a loop and find out how many iterations take how much time
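For example, an os.clock-based variant of your wait function gives sub-second resolution (it is still a busy loop, so it occupies the CPU for the whole duration):
function wait(seconds)
    local start = os.clock()
    repeat until os.clock() - start >= seconds
end

wait(0.25) -- e.g. a quarter of a second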
As Nicol pointed out, both will be inconsistent and unreliable on a non-real-time OS like Windows.
If your OS decides for whatever reason to clock your CPU down or to do some other crap in the background, you're doomed.
So think about your application and decide whether you should be doing this on a non-real-time OS.
On a machine where you can compile your own stuff, there's a way to build sleep() and msleep() functions that don't eat up any CPU time...
Look: http://www.troubleshooters.com/codecorn/lua/lua_lua_calls_c.htm
...for: Make an msleep() Function
I am trying to drive down the current consumption of the contiki os running on the CC2538 development kit.
I would like to operate the device from a CR2032 with a run life of 2 years. To achieve this I would need an average current less than 100uA.
However, when I run the following examples at 3 V, I get these results:
contiki/examples/hello-world = 0.4mA - 2mA
contiki/examples/er-rest-example/er-example-client = 27mA
contiki/examples/er-rest-example/er-example-server = 27mA
thingsquare websocket example = 4mA
I have also designed my own target platform based on the cc2538 and get similar results.
I have read the guide at https://github.com/contiki-os/contiki/blob/648d3576a081b84edd33da05a3a973e209835723/platform/cc2538dk/README.md
and have ensured that in the contiki-conf.h file:
- LPM_CONF_ENABLE 1
- LPM_CONF_MAX_PM 2
Can anyone give me some pointers as to how I can get the current down? It would be most appreciated.
Regards,
Shane
How did you measure the current?
You have to be aware that using a basic ampere meter to measure the current consumption of Contiki OS won't give you relevant results. The system turns the radio on and off at a relatively high rate (8 Hz by default) in order to perform CCA, and this might not be easy to catch with an ampere meter.
To get an idea of the current consumption when the device is in deep sleep (and then do the maths to determine the average current consumption), I'd rather put the device into the PM state before the program reaches the infinite while loop. I used the following code to do that:
lpm_enter();
REG(SYS_CTRL_PMCTL) = SYS_CTRL_PMCTL_PM2;
do { asm("wfi"::); } while(0);   /* drop into PM2 and wait for an interrupt */
leds_on(LEDS_RED);               /* should never reach here */
while(1) {
    ...
On the CC2538, a CCA check consumes about 10-15 mA and lasts approximately 2 ms. When the radio transmits a packet, it consumes about 25 mA. Have a look at this post: Contiki UDP packet transmission duration with CC2538.
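To put rough numbers on it using the figures above: eight CCA checks per second, each lasting ~2 ms at ~12.5 mA, already average to about 8 × 0.002 s × 12.5 mA = 0.2 mA, i.e. 200 µA. That is twice your 100 µA budget before a single packet is sent, which is why lowering the channel check rate (NETSTACK_CONF_RDC_CHANNEL_CHECK_RATE, if your build uses a duty-cycling RDC like ContikiMAC) matters for coin-cell targets.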
Furthermore, to save a little more current, turn off the serial console:
#define CC2538_CONF_QUIET 1
Are you using the SmartRF board? If you want to make proper current measurements with this board, you have to remove all the jumpers: P486, P487, P411 and P408. Keep only the jumpers for BTN_SEL and the RESET signals.
I am trying to find a way to stretch the computation time of a function to 1 second, without using the sleep function, on a Xilinx MicroBlaze running the Xilkernel.
Hence, may I ask how many iterations of a simple for loop I would need to reach a computation time of 1 second?
You can't do this reliably and accurately. If you want to do a bodge like this, you'll have to calibrate it yourself for your particular system, as the MicroBlaze is so configurable that there isn't one right answer. The bodgy way is:
Set up a GPIO peripheral, set one of the pins to '1', run a loop of 1000 iterations (make sure the compiler doesn't optimise it away!), then set the pin back to '0'. Hang a scope off that pin (you're doing work on embedded systems, you do have a scope, right?), see how long the loop takes, and scale the iteration count accordingly.
But the right way to do it is to use a hardware timer peripheral. Even at a very simple level, you could clear the timer at the start of the function, then poll it at the end until it reaches whatever value corresponds to 1 second. This will still have some imperfections, but given that you haven't specified how close to 1 sec you need to be, it is probably adequate.
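For a sense of scale: if the timer is clocked at 100 MHz (an assumed figure; use whatever frequency your design actually feeds the timer), one second corresponds to 100,000,000 ticks, so you would clear the counter on entry, do the work, then spin until the count register reads that value.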
I've been writing some scripts for a game, the scripts are written in Lua. One of the requirements the game has is that the Update method in your lua script (which is called every frame) may take no longer than about 2-3 milliseconds to run, if it does the game just hangs.
I solved this problem with coroutines, all I have to do is call Multitasking.RunTask(SomeFunction) and then the task runs as a coroutine, I then have to scatter Multitasking.Yield() throughout my code, which checks how long the task has been running for, and if it's over 2 ms it pauses the task and resumes it next frame. This is ok, except that I have to scatter Multitasking.Yield() everywhere throughout my code, and it's a real mess.
Ideally, my code would automatically yield when it's been running too long. So, is it possible to take a Lua function as an argument and then execute it line by line (maybe interpreting Lua inside Lua, which I know is possible, but I doubt it is if all you have is a function pointer)? That way I could automatically check the runtime and yield if necessary between every single line.
EDIT:: To be clear, I'm modding a game, that means I only have access to Lua. No C++ tricks allowed.
Check lua_sethook in the Debug Interface (exposed on the Lua side as debug.sethook).
I haven't actually tried this solution myself yet, so I don't know for sure how well it will work.
debug.sethook(coroutine.yield,"",10000);
I picked the number arbitrarily; it will have to be tweaked until it's roughly the time limit you need. Keep in mind that time spent in C functions etc. does not increase the instruction count, so a loop will reach this limit far sooner than code that calls long-running C functions. It may be viable to set a far lower count and instead provide a hook function that checks how much os.clock() or similar has increased.
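That time-based variant could look something like this (an untested sketch; the 2 ms budget and the 1000-instruction count are assumptions to tune, and whether a hook may yield depends on the Lua version: Lua 5.2+ and LuaJIT allow it, stock 5.1 does not):
local TIME_BUDGET = 0.002   -- 2 ms per frame (assumption: tune for your game)
local startClock

local function timedHook()
    if os.clock() - startClock >= TIME_BUDGET then
        coroutine.yield()
    end
end

local task = coroutine.create(function()
    debug.sethook(timedHook, "", 1000)  -- check the clock every 1000 VM instructions
    -- ... the long-running update work goes here ...
end)

-- each frame:
startClock = os.clock()
coroutine.resume(task)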