NodeMCU gpio triggering incorrectly - lua

I'm attempting to read IR information from a NodeMCU running Lua 5.1.4 from a master build as of 8/19/2017.
I might be misunderstanding how GPIO works and I'm having a hard time finding examples that relate to what I'm doing.
pin = 4
pulse_prev_time = 0
irCallback = nil

function trgPulse(level, now)
    gpio.trig(pin, level == gpio.HIGH and "down" or "up", trgPulse)
    duration = now - pulse_prev_time
    print(level, duration)
    pulse_prev_time = now
end

function init(callback)
    irCallback = callback
    gpio.mode(pin, gpio.INT)
    gpio.trig(pin, 'down', trgPulse)
end

-- example
print("Monitoring IR")
init(function (code)
    print("omg i got something", code)
end)
I'm triggering the initial interrupt on low, and then alternating from low to high in trgPulse. In doing so I'd expect the levels to alternate from 1 to 0 in a perfect pattern. But the output shows otherwise:
1 519855430
1 1197
0 609
0 4192
0 2994
1 589
1 2994
1 1198
1 3593
0 4201
1 23357
0 608
0 5390
1 1188
1 4191
1 1198
0 3601
0 3594
1 25147
0 608
1 4781
0 2405
1 3584
0 4799
0 1798
1 1188
1 2994
So I'm clearly doing something wrong or fundamentally don't understand how GPIO works. If this is expected, why are the interrupts being called multiple times if the low/high levels didn't change? And if this does seem wrong, any ideas how to fix it?

I'm clearly doing something wrong or fundamentally don't understand how GPIO works
I suspect it's a bit a combination of both - the latter may be the cause for the former.
My explanation may not be 100% correct from a mechanical/electronic perspective (not my world) but it should be enough as far as writing software for GPIO goes. Switches tend to bounce between 0 and 1 until they eventually settle on one. A good article to read up on this is https://www.allaboutcircuits.com/technical-articles/switch-bounce-how-to-deal-with-it/. The effect can be addressed with hardware and/or software.
Doing it with software usually involves introducing some form of delay to skip the bouncing signals as you're only interested in the "settled state". I documented the NodeMCU Lua function I use for that at https://gist.github.com/marcelstoer/59563e791effa4acb65f
-- inspired by https://github.com/hackhitchin/esp8266-co-uk/blob/master/tutorials/introduction-to-gpio-api.md
-- and http://www.esp8266.com/viewtopic.php?f=24&t=4833&start=5#p29127
local pin = 4 --> GPIO2

function debounce (func)
    local last = 0
    local delay = 50000 -- 50ms * 1000 as tmr.now() has μs resolution

    return function (...)
        local now = tmr.now()
        local delta = now - last
        if delta < 0 then delta = delta + 2147483647 end -- proposed because of delta rolling over, https://github.com/hackhitchin/esp8266-co-uk/issues/2
        if delta < delay then return end

        last = now
        return func(...)
    end
end

function onChange ()
    print('The pin value has changed to '..gpio.read(pin))
end

gpio.mode(pin, gpio.INT, gpio.PULLUP) -- see https://github.com/hackhitchin/esp8266-co-uk/pull/1
gpio.trig(pin, 'both', debounce(onChange))
Note: delay is an empirical value specific to the sensor/switch!

lua: global var vs table entry var

In Lua, when you have a function in a table, what is the difference (if any) between declaring a global variable within the function and declaring the variable as an entry in the table? The variable is x in the example below.
i.e.
dog = {
    x = 33,
    func = function(self)
        self.x = self.x * self.x
    end
}

cat = {
    func = function()
        x = 33
        x = x * x
    end
}
In dog I can use the properties of self to call the function with dog:func() instead of dog.func(dog). But beyond that, is there anything performance-wise to take into consideration when choosing between the two? The examples behave a bit differently when called in a loop, but apart from that?
Well, I heard that the first two rules of optimization are "Don't do it!" and "Don't do it yet!".
There is an official document describing some ways to optimize Lua code, and I recommend it. The most important rule is to prefer local variables to global variables, because global variables are roughly 30% slower than local ones.
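As a small, self-contained illustration of that rule (plain Lua, nothing specific to the question's code): caching a global in a local means the global lookup happens once, outside the hot loop, instead of once per iteration.
local sin = math.sin   -- resolve the global/table lookup once
local sum = 0
for i = 1, 1000000 do
    sum = sum + sin(i) -- only the local is indexed inside the loop
end
print(sum)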
The first thing we can do with the previous code is to compile it and check the bytecode instructions to understand what happens at execution time. I stored the first function in "test-1.lua" and the second one in "test-2.lua".
> cat test-1.lua
dog={x=33,
func=function(self)
self.x=self.x*self.x
end
}
function TEST ()
dog:func()
end
> luac54 -l -s test-1.lua
#
#(part of output omitted for clarity)
#
# Function: dog.func
#
function <test-1.lua:2,4> (6 instructions at 0000000000768740)
1 param, 3 slots, 0 upvalues, 1 local, 1 constant, 0 functions
1 [3] GETFIELD 1 0 0 ; "x"
2 [3] GETFIELD 2 0 0 ; "x"
3 [3] MUL 1 1 2
4 [3] MMBIN 1 2 8 ; __mul
5 [3] SETFIELD 0 0 1 ; "x"
6 [4] RETURN0
#
# Function: TEST (function to call dog.func)
#
function <test-1.lua:7,9> (4 instructions at 00000000000a8a90)
0 params, 2 slots, 1 upvalue, 0 locals, 2 constants, 0 functions
1 [8] GETTABUP 0 0 0 ; _ENV "dog"
2 [8] SELF 0 0 1k ; "func"
3 [8] CALL 0 2 1 ; 1 in 0 out
4 [9] RETURN0
So, if we want to execute TEST 10 times, we will need to execute at least 10*(4+6) bytecode instructions, that is to say 100 bytecode instructions.
> cat test-2.lua
cat={func=function()
x=x*x
end
}
x=33
function TEST ()
cat.func()
end
> luac54 -l -s test-2.lua
#
#(part of output omitted for clarity)
#
# Function: cat.func
#
function <test-2.lua:1,3> (6 instructions at 00000000001b87f0)
0 params, 2 slots, 1 upvalue, 0 locals, 1 constant, 0 functions
1 [2] GETTABUP 0 0 0 ; _ENV "x"
2 [2] GETTABUP 1 0 0 ; _ENV "x"
3 [2] MUL 0 0 1
4 [2] MMBIN 0 1 8 ; __mul
5 [2] SETTABUP 0 0 0 ; _ENV "x"
6 [3] RETURN0
#
# Function: TEST (function to call cat.func)
#
function <test-2.lua:8,10> (4 instructions at 00000000001b8a80)
0 params, 2 slots, 1 upvalue, 0 locals, 2 constants, 0 functions
1 [9] GETTABUP 0 0 0 ; _ENV "cat"
2 [9] GETFIELD 0 0 1 ; "func"
3 [9] CALL 0 1 1 ; 0 in 0 out
4 [10] RETURN0
So, if we want to execute TEST 10 times, we will need to execute at least 10*(4+6) bytecode instructions, that is to say 100 bytecode instructions... which is exactly the same as the first version!
Obviously, not all bytecode instructions take the same time to execute. Some instructions spend much more time in the C runtime than others. Adding two integers might be much faster than allocating a new table and initializing some fields. At that point, we could try a dirty-and-pointless microbenchmark to give us an idea.
One might copy and paste this code in a Lua interpreter:
> cat dirty-and-pointless-benchmark.lua
dog = {
    x = 33,
    func = function(self)
        self.x = self.x * self.x
    end
}
cat = {
    func = function()
        x = x * x
    end
}
x = 33

function StartMeasure ()
    StartTime = os.clock()
end

function StopMeasure (TestName)
    local Duration = os.clock() - StartTime
    print(string.format("%s: %f sec", TestName, Duration))
end

function DoTest1 (Count)
    for Index = 1, Count do
        dog:func()
    end
end

function DoTest2 (Count)
    for Index = 1, Count do
        cat.func()
    end
end

COUNT = 5000000000
StartMeasure()
DoTest1(COUNT)
StopMeasure("VERSION_1")
StartMeasure()
DoTest2(COUNT)
StopMeasure("VERSION_2")
This code gives these results on my computer:
VERSION_1: 246.816000 sec
VERSION_2: 250.412000 sec
Obviously, the difference is probably negligible for most programs. We should always try to spend more time on writing correct programs and less time on micro-benchmarks.
The two code snippets do very different things. dog.func sets self.x to the square of its previous value. cat.func sets the global x to 1089. You can't really compare performance between two things whose functionality is so different.
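To make that concrete, here is the question's code as posted, run twice in a plain Lua interpreter:
dog = { x = 33, func = function(self) self.x = self.x * self.x end }
cat = { func = function() x = 33; x = x * x end }

dog:func()
dog:func()
print(dog.x)  --> 1185921 (33^2 = 1089 after the first call, 1089^2 after the second)

cat.func()
cat.func()
print(x)      --> 1089 (x is reset to 33 inside the function, so every call yields 1089)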
First of all you should change
cat = {
    func = function()
        x = 33
        x = x * x
    end
}
to
x = 33
cat = {
    func = function()
        x = x * x
    end
}
Now we have the same operations.
If I run both functions 10000 times, I end up with cat.func() a few percent slower than dog:func().
This is not surprising, as indexing locals is faster than indexing globals.
To speed up cat you could do something like this:
x = 33
cat = {
    func = function()
        local _x = x
        x = _x * _x
    end
}
The fastest solution is probably
dog = {
    x = 33,
    func = function(self)
        local x = self.x
        self.x = x * x
    end
}
and you could even gain more speed by making your tables and x local.
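A quick sketch of that last suggestion, purely for illustration (the call syntax stays the same):
local dog = {
    x = 33,
    func = function(self)
        local x = self.x  -- one table read instead of two
        self.x = x * x
    end
}

dog:func()
print(dog.x)  --> 1089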
Usually you don't win anything significant doing things like that.
Premature optimization is a big no-no and you should ask yourself what problem you're actually trying to solve.
It also doesn't make sense to squeeze the last percent out of your code if you do not even know enough Lua to write a simple benchmark for your code... just a thought.

lua crash after some time running when using wait() inside of while loop

I am using the SIMCOM 5320a chipset.
I have bare-bones code with a wait() function using os.clock() and a while loop with a counter counting up.
However, after counting up to a certain number (around 1 minute), the program crashes and the board restarts itself.
My questions are:
Does this violate any rule about using nested while loops like this?
Is there any replacement for a sleep/wait function in Lua? Simcom has vmsleep but it will crash after roughly 7 hours of running.
I have tried multiple sleep/wait functions.
I reduced my code to the bare bones in order to debug:
function sleep (a)
    local sec = tonumber(os.clock() + (a/1000))
    while (os.clock() < sec) do
    end
end

function main()
    local count = 0
    while (count < 130) do
        count = count + 1
        print("Nothing yet.." .. count .. "\r\n")
        sleep(2000)
    end
end

main()

What is causing my Nodemcu/ESP 8266 to reset?

I am using a sensor similar to a hall effect sensor to count the number of interrupts. After some random time, usually after it has been on for 1-2 hours, it resets, followed by further resets at random intervals.
counter = 0
sampletime = 0
lastrisetime = tmr.now()
pin = 2

do
    gpio.mode(pin, gpio.INT)

    local function rising(level)
        -- to eliminate multiple counts during a short period (.5 second) the difference is taken
        if ((tmr.now() - lastrisetime) > 500000) then
            lastrisetime = tmr.now()
        end
        -- when tmr.now() resets to zero this takes into account that particular count
        if ((tmr.now() - lastrisetime) < 0) then
            lastrisetime = tmr.now()
        end
    end

    local function falling(level)
        if ((tmr.now() - lastrisetime) > 500000) then
            -- Only counted when the pin is on falling
            -- It is like a sine curve so either the peak or trough is counted
            counter = counter + 1
            print(counter)
            lastrisetime = tmr.now()
            sampletime = lastrisetime
        end
        -- when tmr.now() resets to zero this takes into account that particular count
        if ((tmr.now() - lastrisetime) < 0) then
            lastrisetime = tmr.now()
            counter = counter + 1
            print(counter)
        end
    end

    gpio.trig(pin, "up", rising)
    gpio.trig(pin, "down", falling)
end
This is the output I get in CoolTerm; I also checked the free heap every couple of hours and you can see the results below.
NodeMCU 0.9.6 build 20150704 powered by Lua 5.1.4
> Connecting...
connected
print(node.heap())
22920
> print(node.heap())
22904
> print(node.heap())
22944
> print(node.heap())
22944
> 2. .print(node.heap())
22944
> print(node.heap())
22944
> ∆.)ç˛.䂸 ã ¸#H7.àåË‘
NodeMCU 0.9.6 build 20150704 powered by Lua 5.1.4
> Connecting...
connected
print(node.heap())
21216
> F.)ç˛.¶Ùå¶1.#H  .ÊÍ
NodeMCU 0.9.6 build 20150704 powered by Lua 5.1.4
> Connecting...
connected
H!໩.ä‚D.ã ¸å¶H.åb‘
NodeMCU 0.9.6 build 20150704 powered by Lua 5.1.4
> Connecting...
connected
print(node.heap())
22904
> print(node.heap())
21216
>
Thank you for taking the time to read this. Appreciate your input.
Possible watchdog timer issue.
In your interrupt service routine it looks like you are doing too much work.
Better to remove the timing operations from there: just set a flag, and in another loop check the flag status and complete your timing operations.
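A rough sketch of that idea, assuming the old-style tmr.alarm() API that matches the 0.9.x firmware shown in the CoolTerm log above (on current firmware you would use tmr.create():alarm() instead); the names here are illustrative, not taken from the question:
pin = 2
pulses = 0    -- touched only by the interrupt callback
counter = 0   -- maintained outside the interrupt context

gpio.mode(pin, gpio.INT)
-- keep the interrupt callback as short as possible: just count the edge
gpio.trig(pin, "down", function(level)
    pulses = pulses + 1
end)

-- every 500 ms, fold the accumulated pulses into the real counter
tmr.alarm(1, 500, 1, function()
    if pulses > 0 then
        counter = counter + pulses
        pulses = 0
        print(counter)
    end
end)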
NodeMCU 0.9.6 build 20150704 powered by Lua 5.1.4
The first thing really is to use a recent version of the NodeMCU firmware. 0.9.x is ancient, contains lots of bugs and is no longer supported. See here https://github.com/nodemcu/nodemcu-firmware/#releases
lastrisetime = tmr.now()
The real issue is that tmr.now() rolls over at 2147 seconds I believe. I learned about this when I worked on a proper debounce function.
-- inspired by https://github.com/hackhitchin/esp8266-co-uk/blob/master/tutorials/introduction-to-gpio-api.md
-- and http://www.esp8266.com/viewtopic.php?f=24&t=4833&start=5#p29127
local pin = 4 --> GPIO2

function debounce (func)
    local last = 0
    local delay = 50000 -- 50ms * 1000 as tmr.now() has μs resolution

    return function (...)
        local now = tmr.now()
        local delta = now - last
        if delta < 0 then delta = delta + 2147483647 end -- proposed because of delta rolling over, https://github.com/hackhitchin/esp8266-co-uk/issues/2
        if delta < delay then return end

        last = now
        return func(...)
    end
end

function onChange ()
    print('The pin value has changed to '..gpio.read(pin))
end

gpio.mode(pin, gpio.INT, gpio.PULLUP) -- see https://github.com/hackhitchin/esp8266-co-uk/pull/1
gpio.trig(pin, 'both', debounce(onChange))

Problems interpreting a square wave

I'm trying to use an ESP8266 SoC to read a water flow sensor that is said to produce a square wave as output. I thought it would be a simple matter of using a GPIO port in interrupt mode to count rising edge transitions -- and in fact that initially seemed to work. Then I upgraded the firmware from 0.96 to 1.5, and it has since ceased to work; I see no transitions when the wheel spins anymore.
However, if I run a wire to the pin [for the GPIO I'm using] and touch it to VCC momentarily, the interrupt routine is called as expected, so I know the sensor is wired to the right pin, and the interrupt routine is registered correctly. My code:
function intCb(level)
    SpinCount = SpinCount + 1
    local levelString = "up"
    if level == gpio.HIGH then
        levelString = "down"
    end
    gpio.trig(pin, levelString, intCb)
end

gpio.write(pin, 0)
gpio.trig(pin, "up", intCb)
gpio.mode(pin, gpio.INT, gpio.FLOAT)
So what am I missing? Do I need more support circuitry to read a square wave as input? If so then how did it work initially?
For anything that involves hardware it's really hard to give a definite answer here on SO. In most cases one bases it on hints (and hunches sometimes). A few ideas:
gpio.FLOAT should probably be gpio.PULLUP instead (unless you have an external pull-up resistor).
Your setup doesn't seem to be fundamentally different from e.g. using a push button or a switch to trigger some event. Hence, you probably want to use some kind of debounce or throttle function.
Since you seem to be interested in both rising and falling edges (as you switch between up and down) you might just as well listen for both, no?
So, assuming I drew the right conclusions, something like the following generic skeleton may prove useful:
-- inspired by https://github.com/hackhitchin/esp8266-co-uk/blob/master/tutorials/introduction-to-gpio-api.md
-- and http://www.esp8266.com/viewtopic.php?f=24&t=4833&start=5#p29127
local pin = 4 --> GPIO2

function debounce (func)
    local last = 0
    local delay = 5000

    return function (...)
        local now = tmr.now()
        local delta = now - last
        -- if delta < 0 then delta = delta + 2147483647 end; proposed because of delta rolling over
        if delta < delay then return end

        last = now
        return func(...)
    end
end

function onChange ()
    print('The pin value has changed to '..gpio.read(pin))
end

gpio.mode(pin, gpio.INT, gpio.PULLUP) -- see https://github.com/hackhitchin/esp8266-co-uk/pull/1
gpio.trig(pin, 'both', debounce(onChange))
I solved this using a 555 timer chip as a Schmitt trigger.

NodeMCU/Lua performance issues

I'm adding some code to the ws2812 module to have some kind of reusable buffer where we can store LED values.
The current version is there.
I've two problems.
First I wanted to have some "OO-style" interface. So I did:
local buffer = ws2812.newBuffer(300);
for j = 0,299 do
    buffer:set(j, 255, 255, 255)
end
buffer:write(pin);
The problem here is that buffer:set is resolved on each loop iteration, which is costly (this loop takes ~20.2ms):
8 [2] FORPREP 1 6 ; to 15
9 [3] SELF 5 0 -7 ; "set"
10 [3] MOVE 7 4
11 [3] LOADK 8 -8 ; 255
12 [3] LOADK 9 -8 ; 255
13 [3] LOADK 10 -8 ; 255
14 [3] CALL 5 6 1
15 [2] FORLOOP 1 -7 ; to 9
I found a workaround for this problem which doesn't look "nice":
local buffer = ws2812.newBuffer(300);
local set = getmetatable(buffer).set;
for j = 0,299 do
    set(buffer, j, 255, 255, 255)
end
buffer:write(pin);
It works well (4.3ms for the loop, more than 4 times faster), but it's more like a hack. :/ Is there a better way to "cache" the buffer:set resolution?
Second question, in my C code, I use:
ws2812_buffer * buffer = (ws2812_buffer*)luaL_checkudata(L, 1, "ws2812.buffer");
Which gives back my buffer pointer and checks that it really is a ws2812.buffer. But this call is sloooooow: ~50us on my ESP8266. If it's done on each call (for my 300 buffer:set calls, for example), that's ~15ms!
Is there a better way to fetch some user data and check its type, or should I add some "canary" at the beginning of my structure to do my own check (which would be almost "free" compared to 50us...)?
To make it look less of a hack you could try using
local set = buffer.set
This is essentially the same code, but without the getmetatable call, as the metatable is used implicitly through the __index metamethod.
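Put into the loop from the question (this still relies on the ws2812.newBuffer module you are writing, so buffer:set and buffer:write are your names, not a stock API):
local buffer = ws2812.newBuffer(300)
local set = buffer.set        -- resolved once, via the __index metamethod
for j = 0, 299 do
    set(buffer, j, 255, 255, 255)
end
buffer:write(pin)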
On our project we made our own implementation of luaL_checkudata.
One option - as you similarly suggested - was to use a wrapper object that holds the type. As all userdata was assumed to be wrapped, we could use the wrapper to get and confirm the type of the userdata. But no benchmarking was done, and testing metatables was used instead.
I would say testing the metatables is slower than wrapping, since luaL_checkudata does a lot of work to get and test the metatables, whereas with wrapping we have access to the type directly. However, only benchmarking will tell for sure.
