I created a library that records events to MongoDB from my Rails application. I'm using version 1.4.0 of the mongo gem and Rails 3.0 with Ruby 1.8.7. The relevant code is:
def new_event(collection, event)
  conn = Mongo::Connection.new("localhost", 27017, :pool_size => 5, :pool_timeout => 5)
  conn.db("event").collection(collection).insert(event)
  conn.close
end
This has worked fine for recording new events as they happen on the site. However, I also need to backfill the db with old events, so I'm running a script that basically does this:
SomeModel.find_each do |model|
  Tracker.new.new_event("model_event", { ... info from model ... })
end
I'm trying to backfill something on the order of 50k events. As the script runs, I see this in the mongod log:
Tue Sep 27 23:45:20 [initandlisten] waiting for connections on port 27017
Tue Sep 27 23:46:20 [clientcursormon] mem (MB) res:12 virt:78 mapped:0
Tue Sep 27 23:48:49 [initandlisten] connection accepted from 127.0.0.1:51006 #1
Tue Sep 27 23:49:03 [conn1] remove event.application 103ms
Tue Sep 27 23:49:12 [conn1] remove event.listing 127ms
Tue Sep 27 23:49:20 [clientcursormon] mem (MB) res:37 virt:207 mapped:128
Tue Sep 27 23:51:44 [initandlisten] connection accepted from 127.0.0.1:48103 #2
Tue Sep 27 23:51:44 [conn2] end connection 127.0.0.1:48103
Tue Sep 27 23:51:44 [initandlisten] connection accepted from 127.0.0.1:48104 #3
Tue Sep 27 23:51:44 [conn3] end connection 127.0.0.1:48104
Tue Sep 27 23:51:44 [initandlisten] connection accepted from 127.0.0.1:48105 #4
Tue Sep 27 23:51:44 [conn4] end connection 127.0.0.1:48105
Tue Sep 27 23:51:44 [initandlisten] connection accepted from 127.0.0.1:48106 #5
Tue Sep 27 23:51:44 [conn5] end connection 127.0.0.1:48106
The ports (127.0.0.1:XXXXX) and (what I assume are) the connection #s keep incrementing, until eventually I get this exception from the Ruby script:
Failed to connect to a master node at localhost:27017
/var/bundler/turtle/ruby/1.8/gems/mongo-1.4.0/lib/../lib/mongo/connection.rb:526:in `connect'
/var/bundler/turtle/ruby/1.8/gems/mongo-1.4.0/lib/../lib/mongo/connection.rb:688:in `setup'
/var/bundler/turtle/ruby/1.8/gems/mongo-1.4.0/lib/../lib/mongo/connection.rb:104:in `initialize'
Just found the solution. I needed to make the connection object a class variable so it was shared across all instances of the Tracker class.
@@conn = Mongo::Connection.new("localhost", 27017, :pool_size => 5, :pool_timeout => 5)

def self.new_event(collection, event)
  @@conn.db("event").collection(collection).insert(event)
end
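Since new_event is now a class method, the backfill loop calls it on Tracker directly instead of on a new instance each time. A minimal sketch of the updated loop (the placeholder hash mirrors the one in the question; nothing else changes):

SomeModel.find_each do |model|
  # The single, pooled connection in @@conn handles every insert.
  Tracker.new_event("model_event", { ... info from model ... })
end

With the connection held in a class variable, the whole backfill reuses at most :pool_size sockets instead of opening a brand-new connection for every event.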
I'm running an operational test of EAP-AKA authentication with FreeRADIUS 4.
I want EAP-Request/AKA-Identity to return AT_PERMANENT_ID_REQ (the corresponding value in FreeRADIUS is Permanent-Id-Req),
but when I set Permanent-Id-Req, I get an error even though it is listed as a valid value.
If I set Any-Id-Req or FullAuth-Id-Req, both values are returned successfully.
Values I tried:
[OK] request_identity = Any-Id-Req
[OK] request_identity = FullAuth-Id-Req
[Failure] request_identity = Permanent-Id-Req
What settings do I need so that Permanent-Id-Req also returns its value correctly?
The failure log is below.
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[137]: Pushed parse rule to eap-aka section: request_identity {}
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[137]: Pushed parse rule to eap-aka section: strip_permanent_identity_hint {}
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[137]: Pushed parse rule to eap-aka section: ephemeral_id_length
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[137]: Pushed parse rule to eap-aka section: protected_success {}
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[137]: eap-aka {
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[137]: request_identity = Permanent-Id-Req
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[159]: Invalid value "Permanent-Id-Req". Expected one of 'Any-Id-Req', 'FullAuth-Id-Req', 'Init', 'Permanent-Id-Req', 'no', 'none'
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[137]: Failed evaluating configuration for module "process_eap_aka"
Thanks for your help!
I am looking for a model for text classification for our log notes analytics.
The challenge is that many notes contain non-natural-language text. For example, some notes are thread backtrace output full of symbols, and some are logging output from source code. Among these, the notes that describe how a customer is using our product are the ones we want to classify.
Is there any ML model or approach that I could use for this text classification?
Below are some examples of the different kinds of notes (I changed some content so that no confidential company material is shown):
Backtrace info a developer pasted for bug analysis:
func118 4563453 344 = SYSTEM_FUNC_1 0x00000efa34343 0x0000000009f333a0 0xffe3ebdfd700 <<<<<
Total of 1 API working thread(s)
(gdb) thread find 0x123456
Thread 670 has target id 'Thread 0x123456 (LWP 443)'
(gdb) t 670
[Switching to thread 670 (Thread 0x123456 (LWP 443))]
#0 0x35353453563abcd in __lock_func1_ ()
from /disks/folder1/xxx/xxx_folder1/info_folder/info2_dir/lib64/libpthread.so.0
(gdb) ebt
#0 __lock_func1_()
#1 _LOCK_F_10()
#2 func_mod_4()
#3 func_mod_5()
#4 ModCon::disconnect()
#5 ModCon::abort()
#6 ModServ::disconnect()
#7 ModServManager::disconnect()
#8 mod1::func1()
#9 mod1::func2()
Product log for issue analysis:
cpu/MOD/MOD2/log/
start_mod.log:
Thu Dec 24 00:01:12 UTC 2019 FUN: HG: FILE_A: stopping
Thu Dec 24 00:01:12 UTC 2019 FUN: FILE_A: stopping, timeout -22-
Thu Dec 24 00:01:12 UTC 2019 system-state: cleared FILE_A_start_complete
Thu Dec 24 00:01:12 UTC 2019 FUN: FILE_A: run thread still running: con_b.pl FUN_run 0
Thu Dec 24 00:01:12 UTC 2019 FUN: FILE_A: calling con_b.pl FUN_cleanup 0, time left: -160-
Thu Dec 24 00:01:12 2019 cli: con_a.pl: FUN_cleanup for FILE_A
Thu Dec 24 00:01:12 2019 cmd: con_a.pl: sp got xxx error, will try to act_xxx
Thu Dec 24 00:01:13 UTC 2019 FUN: FILE_A: action 1
Thu Dec 24 00:01:13 UTC 2019 FUN: FILE_A: action 1 complete
Thu Dec 24 00:01:13 UTC 2019 FUN: FILE_A: action 2
Customer-related information about configuration (these are the notes I am most interested in classifying and retrieving from all the notes):
Customer xxx has created func_xxx to protect their data,
they also perform daily backup of their data by using func_xxx2.
They totally created xxx3 objects in each node...
I have an embedded application running on NodeMCU that is not connected to a console, as the UART has been repurposed to obtain serial data from an attached device.
During testing the application ran for about 15 hours then rebooted 5 times in a row before "settling" and continuing to run correctly.
Is it possible to log to a file a traceback of what caused the reboots? I am assuming some kind of PANIC error caused the reboot. I don't think it is a memory issue as the application reports the heap size (via http to a local server) every 30 seconds. Here is a log extract:
Wed May 18 00:46:37 2016 -> '{"s":"1782","i":"1afe34d26348", "d":"heap=12408
Wed May 18 00:47:08 2016 -> '{"s":"1783","i":"1afe34d26348", "d":"heap=12408
Wed May 18 00:47:39 2016 -> '{"s":"1784","i":"1afe34d26348", "d":"heap=12408
Wed May 18 00:48:19 2016 -> '{"s":"1785","i":"1afe34d26348", "d":"heap=11432
Wed May 18 00:50:06 2016 -> '{"s":"0","i":"1afe34d26348", "d":"heap=14560
Wed May 18 00:51:25 2016 -> '{"s":"0","i":"1afe34d26348", "d":"heap=14584
Wed May 18 00:52:45 2016 -> '{"s":"0","i":"1afe34d26348", "d":"heap=14560
Wed May 18 00:54:04 2016 -> '{"s":"0","i":"1afe34d26348", "d":"heap=14584
Wed May 18 00:55:24 2016 -> '{"s":"0","i":"1afe34d26348", "d":"heap=14608
Wed May 18 00:55:55 2016 -> '{"s":"1","i":"1afe34d26348", "d":"heap=12608
Wed May 18 00:56:26 2016 -> '{"s":"2","i":"1afe34d26348", "d":"heap=12600
Wed May 18 00:56:56 2016 -> '{"s":"3","i":"1afe34d26348", "d":"heap=12624
Wed May 18 00:57:27 2016 -> '{"s":"4","i":"1afe34d26348", "d":"heap=12600
In the above log, "s" is a sequential counter that is reset to 0 when the device reboots, and "d" is the heap size (you can ignore the "i" entry; it is just the MAC address of the device that is sending the data).
xpcall won't work in the case of a PANIC device reset.
I tried logging node.bootreason() to a file on reboot, but it doesn't contain a traceback to where the error occurred.
Is there some method for troubleshooting NodeMCU applications that aren't connected to a console?
I have copied all the TokuMX 1.4 data files over to a freshly installed TokuMX 1.5 server, but launching the server fails with:
Fri Aug 1 09:51:04.633 [initandlisten] TokuMX starting : pid=42210 port=27017 dbpath=/data/db 64-bit host=beagle.massive-insights.com
Fri Aug 1 09:51:04.633 [initandlisten] TokuMX mongod server v1.5.0-mongodb-2.4.10, using TokuKV rev 479eed747982601fa52e4c4e4b9b4be18f58d3c1
Fri Aug 1 09:51:04.633 [initandlisten] git version: 3c686d0b09d6dfb9fd54da440247d3075fcfd0ac
Fri Aug 1 09:51:04.633 [initandlisten] build info: Linux a5f9a8a9a9af 3.11.0-20-generic #35-Ubuntu SMP Fri May 2 21:32:49 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux BOOST_LIB_VERSION=1_49
Fri Aug 1 09:51:04.633 [initandlisten] options: { config: "/etc/mongodb.conf", dbpath: "/data/db", logFlushPeriod: 300, logappend: "true", logpath: "/var/log/mongodb/mongodb.log", maxConns: 20000 }
Fri Aug 1 09:51:04.634 [initandlisten] [tokumx] startup
Cannot upgrade TokuDB version 25 database. Previous improper shutdown detected.
Fri Aug 1 09:51:04.661 [initandlisten] Assertion: 16767:Unhandled ydb error: -100011
0xb3b123 0x80c91b 0x8061f0 0x8069df 0x8071fc 0x749e7a 0x74a558 0x735caa 0x7f5a97b5ceed 0x746e79
/usr/local/bin/mongod(_ZN5mongo15printStackTraceERSo+0x23) [0xb3b123]
/usr/local/bin/mongod(_ZN5mongo7storage21MsgAssertionExceptionC2EiRKSs+0x9b) [0x80c91b]
/usr/local/bin/mongod(_ZN5mongo7storage16handle_ydb_errorEi+0x390) [0x8061f0]
/usr/local/bin/mongod(_ZN5mongo7storage22handle_ydb_error_fatalEi+0xf) [0x8069df]
/usr/local/bin/mongod(_ZN5mongo7storage7startupEPNS_16TxnCompleteHooksEPNS0_14UpdateCallbackE+0x5bc) [0x8071fc]
/usr/local/bin/mongod(_ZN5mongo14_initAndListenEi+0x34a) [0x749e7a]
/usr/local/bin/mongod(_ZN5mongo13initAndListenEi+0x18) [0x74a558]
/usr/local/bin/mongod(main+0x29a) [0x735caa]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x7f5a97b5ceed]
/usr/local/bin/mongod() [0x746e79]
Fri Aug 1 09:51:04.670 [initandlisten] fatal error 16767: Unhandled ydb error: -100011
Fri Aug 1 09:51:04.670 [initandlisten] 16767 Unhandled ydb error: -100011
Fri Aug 1 09:51:04.670 [initandlisten] Fatal Assertion 16767
0xb3b123 0x9e654c 0x806bc6 0x8071fc 0x749e7a 0x74a558 0x735caa 0x7f5a97b5ceed 0x746e79
/usr/local/bin/mongod(_ZN5mongo15printStackTraceERSo+0x23) [0xb3b123]
/usr/local/bin/mongod(_ZN5mongo13fassertFailedEi+0x4c) [0x9e654c]
/usr/local/bin/mongod(_ZN5mongo7storage22handle_ydb_error_fatalEi+0x1f6) [0x806bc6]
/usr/local/bin/mongod(_ZN5mongo7storage7startupEPNS_16TxnCompleteHooksEPNS0_14UpdateCallbackE+0x5bc) [0x8071fc]
/usr/local/bin/mongod(_ZN5mongo14_initAndListenEi+0x34a) [0x749e7a]
/usr/local/bin/mongod(_ZN5mongo13initAndListenEi+0x18) [0x74a558]
/usr/local/bin/mongod(main+0x29a) [0x735caa]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x7f5a97b5ceed]
/usr/local/bin/mongod() [0x746e79]
Fri Aug 1 09:51:04.677 [initandlisten]
How should I go about the migration from 1.4 to 1.5, and how do I deal with the error above?
As the log file states, "Cannot upgrade TokuDB version 25 database. Previous improper shutdown detected."
TokuMX does not support upgrades where the file format has changed unless the data files you are using from the prior version came from a cleanly shut down TokuMX.
You need to cleanly shut down your 1.4 server, then copy or re-use the data files with 1.5.
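One way to do the clean shutdown, sketched with the same Ruby driver that appears elsewhere on this page (purely illustrative; stopping the service or running db.shutdownServer() from the mongo shell works just as well, and the shutdown command needs admin privileges):

require 'mongo'

# Ask the running 1.4 server to shut down cleanly before copying its data files.
conn = Mongo::Connection.new("localhost", 27017)
begin
  conn.db("admin").command(:shutdown => 1)
rescue Mongo::ConnectionFailure
  # Expected: the server drops the connection while it is shutting down.
end

Once mongod has exited cleanly, copy the data files (or point the 1.5 binary at the same dbpath) and start the 1.5 server.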
I have an app setup with Rails 3.1, Mongo 1.4.0, Mongoid 2.2.4.
What I am experiencing is this:
Mongo::ConnectionFailure: Failed to connect to a master node at localhost:27017
I've had this problem before, but it went away after a computer restart... this time it does not.
I don't understand it; I didn't do anything. I just put my computer in sleep mode, went home, woke it up, and there it was.
Here is the output of sudo mongod:
Fri Nov 25 21:47:14 [initandlisten] MongoDB starting : pid=1963 port=27017 dbpath=/data/db/ 64-bit host=xxx.local
Fri Nov 25 21:47:14 [initandlisten] db version v2.0.0, pdfile version 4.5
Fri Nov 25 21:47:14 [initandlisten] git version: 695c67dff0ffc361b8568a13366f027caa406222
Fri Nov 25 21:47:14 [initandlisten] build info: Darwin erh2.10gen.cc 9.6.0 Darwin Kernel Version 9.6.0: Mon Nov 24 17:37:00 PST 2008; root:xnu-1228.9.59~1/RELEASE_I386 i386 BOOST_LIB_VERSION=1_40
Fri Nov 25 21:47:14 [initandlisten] options: {}
Fri Nov 25 21:47:14 [initandlisten] journal dir=/data/db/journal
Fri Nov 25 21:47:14 [initandlisten] recover : no journal files present, no recovery needed
Fri Nov 25 21:47:15 [websvr] admin web console waiting for connections on port 28017
Fri Nov 25 21:47:15 [initandlisten] waiting for connections on port 27017
And I am able to connect with mongo in the terminal.
After 2 hours of Googling, I hope the SO community will be able to figure this out.
Please, if you need more information about my app-setup just ask.
Thanks!
What you see is that the connection times out... that happens either after a long period of inactivity, or if you put your computer to sleep.
You can change/increase the timeout value, but that way you cannot stop the connection from eventually timing out.
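The driver-level timeouts are passed as options when the connection is built; a sketch reusing the option names that appear earlier on this page (which option matters depends on your driver version, and with Mongoid the equivalent setting would live in mongoid.yml):

# A larger value only postpones the timeout, it does not remove it.
conn = Mongo::Connection.new("localhost", 27017, :pool_size => 5, :pool_timeout => 30)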
Some MongoDB drivers allow you to set :timeout => false, but Mongoid still seems to have problems with that
(see the last 3 links in the list below).
Hope this helps.
See also:
Mongodb server goes down, how to prevent Rails app from timing out?
MongoDB: What is connection pooling and timeout?
https://github.com/mongodb/mongo-ruby-driver
How can I query mongodb using mongoid/rails without timing out?
http://groups.google.com/group/mongoid/browse_thread/thread/b5c94e7047b42f8a
https://github.com/mongoid/mongoid/issues/455
Try changing localhost to 127.0.0.1!
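With the plain driver call used earlier on this page, that change is just the host string; a sketch (with Mongoid, the host would be set in mongoid.yml instead):

conn = Mongo::Connection.new("127.0.0.1", 27017, :pool_size => 5, :pool_timeout => 5)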