FreeRADIUS 4 : EAP-Request failure - freeradius

I'm running an operational test of EAP-AKA authentication with FreeRADIUS 4.
I want the EAP-Request/AKA-Identity message to carry AT_PERMANENT_ID_REQ (the corresponding value in FreeRADIUS is Permanent-Id-Req), but when I set Permanent-Id-Req the server rejects the value with an error.
If I set Any-Id-Req or FullAuth-Id-Req instead, both values are returned successfully.
Values tested:
[OK] request_identity = Any-Id-Req
[OK] request_identity = FullAuth-Id-Req
[Failure] request_identity = Permanent-Id-Req
What do I need to configure so that Permanent-Id-Req is also accepted and returned correctly?
The failure log is below.
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[137]: Pushed parse rule to eap-aka section: request_identity {}
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[137]: Pushed parse rule to eap-aka section: strip_permanent_identity_hint {}
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[137]: Pushed parse rule to eap-aka section: ephemeral_id_length
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[137]: Pushed parse rule to eap-aka section: protected_success {}
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[137]: eap-aka {
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[137]: request_identity = Permanent-Id-Req
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[159]: Invalid value "Permanent-Id-Req". Expected one of 'Any-Id-Req', 'FullAuth-Id-Req', 'Init', 'Permanent-Id-Req', 'no', 'none'
Tue Apr 26 22:21:48 2022: /usr/local/etc/raddb/sites-enabled/eap-aka-sim[137]: Failed evaluating configuration for module "process_eap_aka"
Thanks for your help!

Related

Get current week Sunday date from today's date using Google Sheets/Excel

I need to get the current week's Sunday date from today's date.
I have tried the formula shown after the table, but it gives the previous week's Sunday; the expected date is 11 Oct 2020 (today is 13 Oct 2020).
Here are the today-vs-expected values:
Today's date -> Expected result
10 Oct -> 04 Oct
11 Oct -> 11 Oct
12 Oct -> 11 Oct
13 Oct -> 11 Oct
14 Oct -> 11 Oct
15 Oct -> 11 Oct
16 Oct -> 11 Oct
17 Oct -> 11 Oct
18 Oct -> 18 Oct
19 Oct -> 18 Oct
=TODAY() - (WEEKDAY(TODAY()) - 1) - 7
With dates in column A, in B1 enter:
=A1-(WEEKDAY(A1)-1)
I have tried this but it gives previous week Sunday, expected date is 11 Oct 2020 (today is 13 Oct 2020)
You don't need -7. Simply try:
=TODAY() - (WEEKDAY(TODAY()) - 1)
and, as the other answer suggests, replace TODAY() with the desired dates.
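To sanity-check the arithmetic outside the spreadsheet, here is a minimal Python sketch of the same rule, assuming WEEKDAY's default convention where Sunday = 1; it reproduces the expected table above:
from datetime import date, timedelta

def most_recent_sunday(d):
    # Same idea as =A1-(WEEKDAY(A1)-1): with WEEKDAY's Sunday=1 convention,
    # Python's weekday() (Mon=0 .. Sun=6) means stepping back (weekday()+1) % 7 days.
    return d - timedelta(days=(d.weekday() + 1) % 7)

for day in range(10, 20):                  # 10 Oct .. 19 Oct 2020, as in the table
    d = date(2020, 10, day)
    print(d, "->", most_recent_sunday(d))  # e.g. 2020-10-13 -> 2020-10-11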

How to build an ML model for classifying text that contains non-'natural language' content?

I am looking for a text-classification model for our log-note analytics.
The challenge is that each note may contain text that is not natural language. For example, some notes are thread-backtrace output with symbols, and some are logging output from source code. Among these, the notes containing a description of how the customer is using our product are the ones we want to classify.
Is there any ML model or approach that I could use for this text classification?
Below are some examples of the different kinds of notes (I changed some content so that no company-confidential material is shown):
Backtrace info a developer pasted for bug analysis:
func118 4563453 344 = SYSTEM_FUNC_1 0x00000efa34343 0x0000000009f333a0 0xffe3ebdfd700 <<<<<
Total of 1 API working thread(s)
(gdb) thread find 0x123456
Thread 670 has target id 'Thread 0x123456 (LWP 443)'
(gdb) t 670
[Switching to thread 670 (Thread 0x123456 (LWP 443))]
#0 0x35353453563abcd in __lock_func1_ ()
from /disks/folder1/xxx/xxx_folder1/info_folder/info2_dir/lib64/libpthread.so.0
(gdb) ebt
#0 __lock_func1_()
#1 _LOCK_F_10()
#2 func_mod_4()
#3 func_mod_5()
#4 ModCon::disconnect()
#5 ModCon::abort()
#6 ModServ::disconnect()
#7 ModServManager::disconnect()
#8 mod1::func1()
#9 mod1::func2()
Product log for issue analysis:
cpu/MOD/MOD2/log/
start_mod.log:
Thu Dec 24 00:01:12 UTC 2019 FUN: HG: FILE_A: stopping
Thu Dec 24 00:01:12 UTC 2019 FUN: FILE_A: stopping, timeout -22-
Thu Dec 24 00:01:12 UTC 2019 system-state: cleared FILE_A_start_complete
Thu Dec 24 00:01:12 UTC 2019 FUN: FILE_A: run thread still running: con_b.pl FUN_run 0
Thu Dec 24 00:01:12 UTC 2019 FUN: FILE_A: calling con_b.pl FUN_cleanup 0, time left: -160-
Thu Dec 24 00:01:12 2019 cli: con_a.pl: FUN_cleanup for FILE_A
Thu Dec 24 00:01:12 2019 cmd: con_a.pl: sp got xxx error, will try to act_xxx
Thu Dec 24 00:01:13 UTC 2019 FUN: FILE_A: action 1
Thu Dec 24 00:01:13 UTC 2019 FUN: FILE_A: action 1 complete
Thu Dec 24 00:01:13 UTC 2019 FUN: FILE_A: action 2
Customer-related information about configuration (these are the notes I am most interested in classifying and retrieving from all the notes):
Customer xxx has created func_xxx to protect their data,
they also perform daily backup of their data by using func_xxx2.
They totally created xxx3 objects in each node...
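As a rough illustration, one common baseline for this kind of mixed content is a character n-gram TF-IDF representation with a linear classifier, since backtraces, product logs and customer descriptions have quite different token distributions. A minimal scikit-learn sketch follows; the labels and training strings are invented placeholders, not real project data:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: one toy example per note category
notes = [
    "(gdb) thread find 0x123456 Switching to thread 670 __lock_func1_()",
    "Thu Dec 24 00:01:12 UTC 2019 FUN: FILE_A: stopping, timeout -22-",
    "Customer xxx has created func_xxx to protect their data",
]
labels = ["backtrace", "product_log", "customer_info"]

# Character n-grams tend to be more robust than word tokens for log-like text,
# since hex addresses, timestamps and identifiers don't tokenize like words.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
model.fit(notes, labels)

print(model.predict(["They perform daily backup of their data using func_xxx2"]))
In practice this needs many labeled notes per category; with enough data the same pipeline can be swapped for a stronger model, but the TF-IDF baseline is a cheap way to see whether the categories are separable at all.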

Removing lines from a file that have duplicate date-time values

I am parsing a file like the one below. I see duplicate lines for "Ver." 3 and 5: because there is only a one-second difference (Mon Jan 15 08:24:02 vs Mon Jan 15 08:24:03), the software prints the entry twice.
Also, if you look at the lines for "Ver." 5, the "Complete Time" values differ by one second. I would like to delete lines where all the other fields match and only the "Time" or "Complete Time" column differs by 1 or 2 seconds.
Loc ID Img Name Ver. Time Complete Time
------------------------------------------------------------------------------------------
ssfad_fs TINT_PAP_1516048511 0 Mon Jan 15 20:35:13 2018 NA
ssfad_fs sfad_jpg 1 Mon Jan 15 18:24:02 2018 Wed Jan 17 18:24:02 2018
ssfad_fs sfad_jpg 1 Mon Jan 15 16:24:02 2018 Wed Jan 17 16:24:02 2018
ssfad_fs sfad_jpg 2 Mon Jan 15 12:24:03 2018 Wed Jan 17 12:24:02 2018
ssfad_fs sfad_jpg 3 Mon Jan 15 08:24:02 2018 Wed Jan 17 08:24:02 2018
ssfad_fs sfad_jpg 3 Mon Jan 15 08:24:03 2018 Wed Jan 17 08:24:02 2018
ssfad_fs sfad_jpg 4 Mon Jan 15 04:24:02 2018 Wed Jan 17 04:24:02 2018
ssfad_fs sfad_jpg 5 Mon Jan 15 00:24:03 2018 Wed Jan 17 00:24:59 2018
ssfad_fs sfad_jpg 5 Mon Jan 15 00:24:03 2018 Wed Jan 17 00:25:00 2018
ssfad_fs sfad_jpg 6 Sun Jan 14 20:24:03 2018 Tue Jan 16 20:24:02 2018
Expected output:
Loc ID Img Name Ver. Time Complete Time
------------------------------------------------------------------------------------------
ssfad_fs TINT_PAP_1516048511 0 Mon Jan 15 20:35:13 2018 NA
ssfad_fs sfad_jpg 1 Mon Jan 15 18:24:02 2018 Wed Jan 17 18:24:02 2018
ssfad_fs sfad_jpg 1 Mon Jan 15 16:24:02 2018 Wed Jan 17 16:24:02 2018
ssfad_fs sfad_jpg 2 Mon Jan 15 12:24:03 2018 Wed Jan 17 12:24:02 2018
ssfad_fs sfad_jpg 3 Mon Jan 15 08:24:02 2018 Wed Jan 17 08:24:02 2018
ssfad_fs sfad_jpg 4 Mon Jan 15 04:24:02 2018 Wed Jan 17 04:24:02 2018
ssfad_fs sfad_jpg 5 Mon Jan 15 00:24:03 2018 Wed Jan 17 00:24:59 2018
ssfad_fs sfad_jpg 6 Sun Jan 14 20:24:03 2018 Tue Jan 16 20:24:02 2018
When I try the command cat junk1.jnk | sort -uk 3,3, it also deletes the third line, which has the same Ver. 1 but genuinely different times. I want to keep that line. Please help.
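sort alone can't express "all other fields equal and the timestamps within a second or two", so one option is a small script. A minimal Python sketch of that rule (the file name junk1.jnk and the 2-second threshold come from the question; the column layout is an assumption based on the sample):
from datetime import datetime

TOLERANCE = 2  # seconds; "difference of 1 or 2 seconds" per the question

def parse_times(fields):
    # Fields 4-8 are "Time", fields 9-13 are "Complete Time" (may be just "NA")
    t1 = datetime.strptime(" ".join(fields[3:8]), "%a %b %d %H:%M:%S %Y")
    t2 = None
    if len(fields) >= 13:
        t2 = datetime.strptime(" ".join(fields[8:13]), "%a %b %d %H:%M:%S %Y")
    return t1, t2

def close(a, b):
    if a is None or b is None:
        return a is b
    return abs((a - b).total_seconds()) <= TOLERANCE

kept = []  # (Loc ID, Img Name, Ver.) plus timestamps of lines already printed
with open("junk1.jnk") as fh:
    for line in fh:
        fields = line.split()
        if len(fields) < 9:          # header and separator lines: pass through
            print(line, end="")
            continue
        key = tuple(fields[:3])      # Loc ID, Img Name, Ver.
        t1, t2 = parse_times(fields)
        if any(key == k and close(t1, a) and close(t2, b) for k, a, b in kept):
            continue                 # near-duplicate of an earlier line: drop it
        kept.append((key, t1, t2))
        print(line, end="")
On the sample data this should reproduce the expected output above: the two Ver. 1 lines are hours apart and both survive, while the second Ver. 3 and Ver. 5 lines are dropped.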

NodeMCU traceback on reboot

I have an embedded application running NodeMCU that is not connected to a console as the UART has been repurposed to obtain serial data from an attached device.
During testing the application ran for about 15 hours then rebooted 5 times in a row before "settling" and continuing to run correctly.
Is it possible to log to a file a traceback of what caused the reboots? I am assuming some kind of PANIC error caused the reboot. I don't think it is a memory issue as the application reports the heap size (via http to a local server) every 30 seconds. Here is a log extract:
Wed May 18 00:46:37 2016 -> '{"s":"1782","i":"1afe34d26348", "d":"heap=12408
Wed May 18 00:47:08 2016 -> '{"s":"1783","i":"1afe34d26348", "d":"heap=12408
Wed May 18 00:47:39 2016 -> '{"s":"1784","i":"1afe34d26348", "d":"heap=12408
Wed May 18 00:48:19 2016 -> '{"s":"1785","i":"1afe34d26348", "d":"heap=11432
Wed May 18 00:50:06 2016 -> '{"s":"0","i":"1afe34d26348", "d":"heap=14560
Wed May 18 00:51:25 2016 -> '{"s":"0","i":"1afe34d26348", "d":"heap=14584
Wed May 18 00:52:45 2016 -> '{"s":"0","i":"1afe34d26348", "d":"heap=14560
Wed May 18 00:54:04 2016 -> '{"s":"0","i":"1afe34d26348", "d":"heap=14584
Wed May 18 00:55:24 2016 -> '{"s":"0","i":"1afe34d26348", "d":"heap=14608
Wed May 18 00:55:55 2016 -> '{"s":"1","i":"1afe34d26348", "d":"heap=12608
Wed May 18 00:56:26 2016 -> '{"s":"2","i":"1afe34d26348", "d":"heap=12600
Wed May 18 00:56:56 2016 -> '{"s":"3","i":"1afe34d26348", "d":"heap=12624
Wed May 18 00:57:27 2016 -> '{"s":"4","i":"1afe34d26348", "d":"heap=12600
In the above log, "s" is a sequential counter that is reset to 0 when the device reboots, and "d" is the heap size (you can ignore the "i" entry; it is just the MAC address of the device sending the data).
xpcall won't work in the case of a PANIC device reset.
I tried logging node.bootreason() to a file on reboot, but it doesn't contain a traceback to where the error occurred.
Is there some method for troubleshooting NodeMCU applications that aren't connected to a console?

Connection error using Rails 3.0 and Mongo 1.4.0

I created a library that would record events to MongoDB from my Rails application. I'm using version 1.4.0 of the mongo gem and Rails 3.0 w/Ruby 1.8.7. The relevant code is:
def new_event(collection, event)
  # A new connection is opened and closed for every single event
  @conn = Mongo::Connection.new("localhost", 27017, :pool_size => 5, :pool_timeout => 5)
  @conn.db("event").collection(collection).insert(event)
  @conn.close
end
This has worked fine for recording new events as they happen on the site. However, I also need to backfill the db with old events, so I'm running a script that basically does this:
SomeModel.find_each do |model|
  Tracker.new.new_event("model_event", { ... info from model ... })
end
I'm trying to backfill something on the order of 50k events. As the script runs, I see this:
Tue Sep 27 23:45:20 [initandlisten] waiting for connections on port 27017
Tue Sep 27 23:46:20 [clientcursormon] mem (MB) res:12 virt:78 mapped:0
Tue Sep 27 23:48:49 [initandlisten] connection accepted from 127.0.0.1:51006 #1
Tue Sep 27 23:49:03 [conn1] remove event.application 103ms
Tue Sep 27 23:49:12 [conn1] remove event.listing 127ms
Tue Sep 27 23:49:20 [clientcursormon] mem (MB) res:37 virt:207 mapped:128
Tue Sep 27 23:51:44 [initandlisten] connection accepted from 127.0.0.1:48103 #2
Tue Sep 27 23:51:44 [conn2] end connection 127.0.0.1:48103
Tue Sep 27 23:51:44 [initandlisten] connection accepted from 127.0.0.1:48104 #3
Tue Sep 27 23:51:44 [conn3] end connection 127.0.0.1:48104
Tue Sep 27 23:51:44 [initandlisten] connection accepted from 127.0.0.1:48105 #4
Tue Sep 27 23:51:44 [conn4] end connection 127.0.0.1:48105
Tue Sep 27 23:51:44 [initandlisten] connection accepted from 127.0.0.1:48106 #5
Tue Sep 27 23:51:44 [conn5] end connection 127.0.0.1:48106
The ports (127.0.0.1:XXXXX) and (what I assume are) the connection pool numbers keep incrementing, until eventually I get this exception from the Ruby script:
Failed to connect to a master node at localhost:27017
/var/bundler/turtle/ruby/1.8/gems/mongo-1.4.0/lib/../lib/mongo/connection.rb:526:in `connect'
/var/bundler/turtle/ruby/1.8/gems/mongo-1.4.0/lib/../lib/mongo/connection.rb:688:in `setup'
/var/bundler/turtle/ruby/1.8/gems/mongo-1.4.0/lib/../lib/mongo/connection.rb:104:in `initialize'
Just found the solution. I needed to make the connection object a class variable so it was shared across all instances of the Tracker class.
# Share one connection across all instances by making it a class variable
@@conn = Mongo::Connection.new("localhost", 27017, :pool_size => 5, :pool_timeout => 5)

def self.new_event(collection, event)
  @@conn.db("event").collection(collection).insert(event)
end
