NodeMCU traceback on reboot - lua

I have an embedded application running NodeMCU that is not connected to a console as the UART has been repurposed to obtain serial data from an attached device.
During testing, the application ran for about 15 hours, then rebooted 5 times in a row before "settling" and continuing to run correctly.
Is it possible to write a traceback of whatever caused the reboots to a log file? I am assuming some kind of PANIC error caused the reboots. I don't think it is a memory issue, as the application reports the heap size (via HTTP to a local server) every 30 seconds. Here is a log extract:
Wed May 18 00:46:37 2016 -> '{"s":"1782","i":"1afe34d26348", "d":"heap=12408
Wed May 18 00:47:08 2016 -> '{"s":"1783","i":"1afe34d26348", "d":"heap=12408
Wed May 18 00:47:39 2016 -> '{"s":"1784","i":"1afe34d26348", "d":"heap=12408
Wed May 18 00:48:19 2016 -> '{"s":"1785","i":"1afe34d26348", "d":"heap=11432
Wed May 18 00:50:06 2016 -> '{"s":"0","i":"1afe34d26348", "d":"heap=14560
Wed May 18 00:51:25 2016 -> '{"s":"0","i":"1afe34d26348", "d":"heap=14584
Wed May 18 00:52:45 2016 -> '{"s":"0","i":"1afe34d26348", "d":"heap=14560
Wed May 18 00:54:04 2016 -> '{"s":"0","i":"1afe34d26348", "d":"heap=14584
Wed May 18 00:55:24 2016 -> '{"s":"0","i":"1afe34d26348", "d":"heap=14608
Wed May 18 00:55:55 2016 -> '{"s":"1","i":"1afe34d26348", "d":"heap=12608
Wed May 18 00:56:26 2016 -> '{"s":"2","i":"1afe34d26348", "d":"heap=12600
Wed May 18 00:56:56 2016 -> '{"s":"3","i":"1afe34d26348", "d":"heap=12624
Wed May 18 00:57:27 2016 -> '{"s":"4","i":"1afe34d26348", "d":"heap=12600
In the above log, "s" is a sequential counter that is reset to 0 when the device reboots, and "d" reports the heap size (you can ignore the "i" entry; it is just the MAC address of the device sending the data).
xpcall won't work in the case of a PANIC device reset.
I tried logging node.bootreason() to a file on reboot, but it doesn't contain a traceback to where the error occurred.
Is there some method for troubleshooting NodeMCU applications that aren't connected to a console?
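For what it's worth, one partial workaround is to log everything node.bootreason() does provide. After an exception reset, NodeMCU can return extended fields (exception cause and EPC/EXCVADDR registers); that is still not a Lua traceback, but the program counter can be matched against the firmware map file to locate the crash. A minimal sketch, assuming the stock node and file modules and that your firmware build populates the extended fields (verify against your version):

-- Sketch: append node.bootreason() details to a flash file at startup.
-- Not a Lua traceback, but enough to look up the crash address.
local rawcode, reason, exccause, epc1, epc2, epc3, excvaddr, depc = node.bootreason()

if file.open("boot.log", "a+") then
  file.write(string.format("boot rawcode=%d reason=%d", rawcode, reason))
  if exccause then
    -- Extended fields are only present after an exception reset.
    file.write(string.format(" exccause=%d epc1=0x%X excvaddr=0x%X",
                             exccause, epc1 or 0, excvaddr or 0))
  end
  file.write("\n")
  file.close()
end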

Related

FreeRADIUS 4: EAP-Request failure

I am running an interoperability test of EAP-AKA authentication with FreeRADIUS 4.
I want the EAP-Request / AKA-Identity message to return AT_PERMANENT_ID_REQ (the value set in FreeRADIUS is Permanent-Id-Req),
but when I set Permanent-Id-Req, I get an error even though the value is specified exactly as listed.
If I set Any-Id-Req or FullAuth-Id-Req, both values are returned successfully.
Values tested:
[OK] request_identity = Any-Id-Req
[OK] request_identity = FullAuth-Id-Req
[Failure] request_identity = Permanent-Id-Req
What settings do I need so that Permanent-Id-Req is also returned correctly?
The failure log is below.
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[137]: Pushed parse rule to eap-aka section: request_identity {}
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[137]: Pushed parse rule to eap-aka section: strip_permanent_identity_hint {}
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[137]: Pushed parse rule to eap-aka section: ephemeral_id_length
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[137]: Pushed parse rule to eap-aka section: protected_success {}
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[137]: eap-aka {
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[137]: request_identity = Permanent-Id-Req
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[159]: Invalid value "Permanent-Id-Req". Expected one of 'Any-Id-Req', 'FullAuth-Id-Req', 'Init', 'Permanent-Id-Req', 'no', 'none'
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[137]: Failed evaluating configuration for module "process_eap_aka"
Thanks for your help!

How to build an ML model for text classification when the text contains non-'natural language' content?

I am looking for a model for text classification for our log-notes analytics.
The challenge is that each note may contain non-'natural language' text. For example, some notes are thread backtrace output with symbols, and some are logging output from source code. Among these, the notes that describe how a customer is using our product are the ones we want to classify.
Is there any ML model or approach that I could use for this text classification?
Below are some examples of the different notes (I changed some content so that no company-confidential material is shown):
Backtrace info a developer pasted for bug analysis:
func118 4563453 344 = SYSTEM_FUNC_1 0x00000efa34343 0x0000000009f333a0 0xffe3ebdfd700 <<<<<
Total of 1 API working thread(s)
(gdb) thread find 0x123456
Thread 670 has target id 'Thread 0x123456 (LWP 443)'
(gdb) t 670
[Switching to thread 670 (Thread 0x123456 (LWP 443))]
#0 0x35353453563abcd in __lock_func1_ ()
from /disks/folder1/xxx/xxx_folder1/info_folder/info2_dir/lib64/libpthread.so.0
(gdb) ebt
#0 __lock_func1_()
#1 _LOCK_F_10()
#2 func_mod_4()
#3 func_mod_5()
#4 ModCon::disconnect()
#5 ModCon::abort()
#6 ModServ::disconnect()
#7 ModServManager::disconnect()
#8 mod1::func1()
#9 mod1::func2()
Product log for issue analysis:
cpu/MOD/MOD2/log/
start_mod.log:
Thu Dec 24 00:01:12 UTC 2019 FUN: HG: FILE_A: stopping
Thu Dec 24 00:01:12 UTC 2019 FUN: FILE_A: stopping, timeout -22-
Thu Dec 24 00:01:12 UTC 2019 system-state: cleared FILE_A_start_complete
Thu Dec 24 00:01:12 UTC 2019 FUN: FILE_A: run thread still running: con_b.pl FUN_run 0
Thu Dec 24 00:01:12 UTC 2019 FUN: FILE_A: calling con_b.pl FUN_cleanup 0, time left: -160-
Thu Dec 24 00:01:12 2019 cli: con_a.pl: FUN_cleanup for FILE_A
Thu Dec 24 00:01:12 2019 cmd: con_a.pl: sp got xxx error, will try to act_xxx
Thu Dec 24 00:01:13 UTC 2019 FUN: FILE_A: action 1
Thu Dec 24 00:01:13 UTC 2019 FUN: FILE_A: action 1 complete
Thu Dec 24 00:01:13 UTC 2019 FUN: FILE_A: action 2
Customer-related information about configuration (these are the notes I am most interested in classifying and retrieving from all the notes):
Customer xxx has created func_xxx to protect their data,
they also perform daily backup of their data by using func_xxx2.
They totally created xxx3 objects in each node...
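One common baseline for this kind of mixed log/prose data is character n-gram TF-IDF features with a linear classifier, since character n-grams cope with hex addresses, symbols, and paths better than word tokens do. A minimal sketch with scikit-learn; the labels, class names, and training notes below are hypothetical placeholders:

# Baseline sketch: char n-gram TF-IDF + logistic regression to separate
# customer-usage notes from backtraces and product logs.
# All example texts and labels are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

notes = [
    "#0  0x35353453563abcd in __lock_func1_ ()",                # backtrace
    "Thu Dec 24 00:01:12 UTC 2019 FUN: FILE_A: stopping",       # product log
    "Customer xxx has created func_xxx to protect their data",  # customer note
]
labels = ["backtrace", "log", "customer"]

clf = Pipeline([
    # char_wb n-grams tolerate addresses, paths, and symbols
    ("tfidf", TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(notes, labels)

print(clf.predict(["They also perform daily backup of their data"]))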

iOS APNs notifications

I have completed my testing and all notifications are going through nicely using ApnsPHP. But as soon as I switch to production, I get this result:
Tue, 16 Oct 2012 16:40:48 +0200 ApnsPHP[5709]: INFO: Trying ssl://gateway.push.apple.com:2195...
Tue, 16 Oct 2012 16:40:51 +0200 ApnsPHP[5709]: INFO: Connected to ssl://gateway.push.apple.com:2195.
Tue, 16 Oct 2012 16:40:51 +0200 ApnsPHP[5709]: INFO: Sending messages queue, run #1: 1 message(s) left in queue.
Tue, 16 Oct 2012 16:40:51 +0200 ApnsPHP[5709]: STATUS: Sending message ID 1 [custom identifier: Message-Badge-3] (1/3): 119 bytes.
Tue, 16 Oct 2012 16:40:51 +0200 ApnsPHP[5709]: INFO: Disconnected.
This looks fine to me; however, my device does not receive the notification.
Please help!
Ensure your device token is the production one. Device tokens for development are different from production tokens for the same device.
From Apple:
Take note that the device token in the production environment and the device token in the development environment are not the same value.
Source

ConnectionFailure using Mongo in Rails 3.1

I have an app setup with Rails 3.1, Mongo 1.4.0, Mongoid 2.2.4.
What I am experiencing is this:
Mongo::ConnectionFailure: Failed to connect to a master node at localhost:27017
I've had this problem before, but it went away after a computer restart... this time it does not.
I don't understand; I didn't change anything. I just put my computer into sleep mode, went home, woke it up, and there it was.
Here is the output of sudo mongod:
Fri Nov 25 21:47:14 [initandlisten] MongoDB starting : pid=1963 port=27017 dbpath=/data/db/ 64-bit host=xxx.local
Fri Nov 25 21:47:14 [initandlisten] db version v2.0.0, pdfile version 4.5
Fri Nov 25 21:47:14 [initandlisten] git version: 695c67dff0ffc361b8568a13366f027caa406222
Fri Nov 25 21:47:14 [initandlisten] build info: Darwin erh2.10gen.cc 9.6.0 Darwin Kernel Version 9.6.0: Mon Nov 24 17:37:00 PST 2008; root:xnu-1228.9.59~1/RELEASE_I386 i386 BOOST_LIB_VERSION=1_40
Fri Nov 25 21:47:14 [initandlisten] options: {}
Fri Nov 25 21:47:14 [initandlisten] journal dir=/data/db/journal
Fri Nov 25 21:47:14 [initandlisten] recover : no journal files present, no recovery needed
Fri Nov 25 21:47:15 [websvr] admin web console waiting for connections on port 28017
Fri Nov 25 21:47:15 [initandlisten] waiting for connections on port 27017
And I am able to connect with mongo in the terminal.
After 2 hours of Googling, I hope the SO community is able to figure this out.
Please, if you need more information about my app-setup just ask.
Thanks!
What you see is the connection timing out... that happens either after a long period of inactivity or if you put your computer to sleep.
You can increase the timeout value, but that won't stop the connection from eventually timing out.
Some MongoDB drivers allow you to set :timeout => false, but Mongoid still seems to have problems with that
(see the last three links in the list below).
Hope this helps.
See also:
Mongodb server goes down, how to prevent Rails app from timing out?
MongoDB: What is connection pooling and timeout?
https://github.com/mongodb/mongo-ruby-driver
How can I query mongodb using mongoid/rails without timing out?
http://groups.google.com/group/mongoid/browse_thread/thread/b5c94e7047b42f8a
https://github.com/mongoid/mongoid/issues/455
Try changing localhost to 127.0.0.1!
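If switching to 127.0.0.1 helps, the change can also go into the Mongoid configuration. A rough sketch for Mongoid 2.x with the mongo 1.x driver; the database name is a placeholder, and the :timeout => false option mentioned above should be verified against your driver version:

# config/initializers/mongoid.rb -- sketch only; "myapp_development" is a
# placeholder name, and :timeout => false may not suit every setup.
Mongoid.configure do |config|
  connection = Mongo::Connection.new("127.0.0.1", 27017, :timeout => false)
  config.master = connection.db("myapp_development")
end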

Connection error using Rails 3.0 and Mongo 1.4.0

I created a library that would record events to MongoDB from my Rails application. I'm using version 1.4.0 of the mongo gem and Rails 3.0 w/Ruby 1.8.7. The relevant code is:
def new_event(collection, event)
  @conn = Mongo::Connection.new("localhost", 27017, :pool_size => 5, :pool_timeout => 5)
  @conn.db("event").collection(collection).insert(event)
  @conn.close
end
This has worked fine for recording new events as they happen on the site. However, I also need to backfill the db with old events, so I'm running a script that basically does this:
SomeModel.find_each do |model|
  Tracker.new.new_event("model_event", { ... info from model ... })
end
I'm trying to backfill something on the order of 50k events. As the script runs, I see this:
Tue Sep 27 23:45:20 [initandlisten] waiting for connections on port 27017
Tue Sep 27 23:46:20 [clientcursormon] mem (MB) res:12 virt:78 mapped:0
Tue Sep 27 23:48:49 [initandlisten] connection accepted from 127.0.0.1:51006 #1
Tue Sep 27 23:49:03 [conn1] remove event.application 103ms
Tue Sep 27 23:49:12 [conn1] remove event.listing 127ms
Tue Sep 27 23:49:20 [clientcursormon] mem (MB) res:37 virt:207 mapped:128
Tue Sep 27 23:51:44 [initandlisten] connection accepted from 127.0.0.1:48103 #2
Tue Sep 27 23:51:44 [conn2] end connection 127.0.0.1:48103
Tue Sep 27 23:51:44 [initandlisten] connection accepted from 127.0.0.1:48104 #3
Tue Sep 27 23:51:44 [conn3] end connection 127.0.0.1:48104
Tue Sep 27 23:51:44 [initandlisten] connection accepted from 127.0.0.1:48105 #4
Tue Sep 27 23:51:44 [conn4] end connection 127.0.0.1:48105
Tue Sep 27 23:51:44 [initandlisten] connection accepted from 127.0.0.1:48106 #5
Tue Sep 27 23:51:44 [conn5] end connection 127.0.0.1:48106
The ports (127.0.0.1:XXXXX) and (what I assume are) the connection pool numbers keep incrementing, until eventually I get this exception from the Ruby script:
Failed to connect to a master node at localhost:27017
/var/bundler/turtle/ruby/1.8/gems/mongo-1.4.0/lib/../lib/mongo/connection.rb:526:in `connect'
/var/bundler/turtle/ruby/1.8/gems/mongo-1.4.0/lib/../lib/mongo/connection.rb:688:in `setup'
/var/bundler/turtle/ruby/1.8/gems/mongo-1.4.0/lib/../lib/mongo/connection.rb:104:in `initialize'
Just found the solution: I needed to make the connection object a class variable so it is shared across all instances of the Tracker class, instead of opening a new connection for every event.
@@conn = Mongo::Connection.new("localhost", 27017, :pool_size => 5, :pool_timeout => 5)

def self.new_event(collection, event)
  @@conn.db("event").collection(collection).insert(event)
end
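With new_event now defined as a class method, the backfill loop calls it on the class itself; the payload hash below is a hypothetical stand-in for the real model attributes:

# Adjusted backfill loop (sketch); :model_id is a placeholder attribute.
SomeModel.find_each do |model|
  Tracker.new_event("model_event", { :model_id => model.id })
end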
