Wispr-Location-Id and Wispr-Location-Name saved when accounting - freeradius

I have several Hotspots around the city, each one with different Wispr-Location-Id and Wispr-Location-Name. All these Hotspots use the same Radius server and share the same database.
Is there any way, when an accounting message is received, to save these two parameters (Wispr-Location-Id and Wispr-Location-Name)?
I need to know which clients roam from one Hotspot to another.
Thanks!

You'll need to edit the queries specific to your database dialect:
They can be found at /etc/(raddb|freeradius)/sql/<dialect>/dialup.conf
https://github.com/FreeRADIUS/freeradius-server/blob/v2.x.x/raddb/sql/mysql/dialup.conf#L163
You'll need to add the additional fields and values to the following queries:
- accounting_start_query
- accounting_stop_query
- accounting_stop_query_alt
An example of a modified accounting_start_query for MySQL would be:
accounting_start_query = " \
INSERT INTO ${acct_table1} \
(acctsessionid, acctuniqueid, username, \
realm, nasipaddress, nasportid, \
nasporttype, acctstarttime, acctstoptime, \
acctsessiontime, acctauthentic, connectinfo_start, \
connectinfo_stop, acctinputoctets, acctoutputoctets, \
calledstationid, callingstationid, acctterminatecause, \
servicetype, framedprotocol, framedipaddress, \
acctstartdelay, acctstopdelay, xascendsessionsvrkey, \
wisprlocationid, wisprlocationname) \
VALUES \
('%{Acct-Session-Id}', '%{Acct-Unique-Session-Id}', \
'%{SQL-User-Name}', \
'%{Realm}', '%{NAS-IP-Address}', '%{NAS-Port}', \
'%{NAS-Port-Type}', '%S', NULL, \
'0', '%{Acct-Authentic}', '%{Connect-Info}', \
'', '0', '0', \
'%{Called-Station-Id}', '%{Calling-Station-Id}', '', \
'%{Service-Type}', '%{Framed-Protocol}', '%{Framed-IP-Address}', \
'%{%{Acct-Delay-Time}:-0}', '0', '%{X-Ascend-Session-Svr-Key}',\
'%{WISPr-Location-ID}', '%{WISPr-Location-Name}')"
You'll also need to add matching string-type columns to the radacct table to hold the new values.
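For MySQL, a minimal sketch of that schema change could look like this (the column names match the query above; the VARCHAR length is an assumption, sized to the 253-byte RADIUS attribute limit):
ALTER TABLE radacct
    ADD COLUMN wisprlocationid VARCHAR(253) DEFAULT NULL,
    ADD COLUMN wisprlocationname VARCHAR(253) DEFAULT NULL;
Once the values are being stored, spotting roaming clients is just a query; for example, something like this lists users seen at more than one location in the last day:
SELECT username, COUNT(DISTINCT wisprlocationid) AS locations
FROM radacct
WHERE acctstarttime > NOW() - INTERVAL 1 DAY
GROUP BY username
HAVING locations > 1;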

Related

Eventarc triggers for cross-project

I have created a Cloud Run service, but my Eventarc trigger is not firing for the cross-project BigQuery events it should read. How do I set the event filter for the resource name in Eventarc so that an insert job / job completed event on the BQ table triggers the service?
gcloud eventarc triggers create ${SERVICE}-test1 \
--location=${REGION} --service-account ${SVC_ACCOUNT} \
--destination-run-service ${SERVICE} \
--destination-run-region=${REGION} \
--event-filters type=google.cloud.audit.log.v1.written \
--event-filters methodName=google.cloud.bigquery.v2.JobService.InsertJob \
--event-filters serviceName=bigquery.googleapis.com \
--event-filters-path-pattern resourceName="/projects/destinationproject/locations/us-central1/jobs/*"
I have tried giving the resource name in multiple ways, like:
"projects/projectname/datasets/outputdataset/tables/outputtable"

How to use a custom dataset for T5X?

I've created a custom seqio task and added it to the TaskRegistry, following the instructions in the documentation. When I set the gin parameters to account for the new task, I receive an error saying my task does not exist.
No Task or Mixture found with name [my task name]. Available:
Am I using the correct Mixture/Task module that needs to be imported? If not, what is the correct statement that would allow me to use my custom task?
--gin.MIXTURE_OR_TASK_MODULE=\"t5.data.tasks\"
Here is the full eval script I am using.
python3 t5x/eval.py \
--gin_file=t5x/examples/t5/t5_1_0/11B.gin \
--gin_file=t5x/configs/runs/eval.gin \
--gin.MIXTURE_OR_TASK_NAME=\"task_name\" \
--gin.MIXTURE_OR_TASK_MODULE=\"t5.data.tasks\" \
--gin.partitioning.PjitPartitioner.num_partitions=8 \
--gin.utils.DatasetConfig.split=\"test\" \
--gin.DROPOUT_RATE=0.0 \
--gin.CHECKPOINT_PATH=\"${CHECKPOINT_PATH}\" \
--gin.EVAL_OUTPUT_DIR=\"${EVAL_OUTPUT_DIR}\"
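The module named in --gin.MIXTURE_OR_TASK_MODULE is what eval.py imports before looking up the task, and t5.data.tasks only registers the stock T5 tasks, so a custom task registered elsewhere won't be found there. The flag should point at the importable module containing your own TaskRegistry.add call. A minimal sketch of such a module (the file name, data path, and TSV format are assumptions for illustration):
# my_tasks.py -- hypothetical module that registers the custom task.
import functools

import seqio
import t5.data
from t5.data import preprocessors

vocab = t5.data.get_default_vocabulary()

seqio.TaskRegistry.add(
    "task_name",
    # Assumed TSV source with "inputs<TAB>targets" lines.
    source=seqio.TextLineDataSource(
        split_to_filepattern={"test": "/path/to/test.tsv"}),
    preprocessors=[
        functools.partial(preprocessors.parse_tsv,
                          field_names=["inputs", "targets"]),
        seqio.preprocessors.tokenize,
        seqio.preprocessors.append_eos,
    ],
    output_features={
        "inputs": seqio.Feature(vocabulary=vocab),
        "targets": seqio.Feature(vocabulary=vocab),
    },
)
With that file on PYTHONPATH, passing --gin.MIXTURE_OR_TASK_MODULE=\"my_tasks\" instead of \"t5.data.tasks\" should let eval.py import the module that actually registers the task.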

Neo4J Bloom doesn't show properties or categories

So I load data with Python using this code:
from neo4j import GraphDatabase

driver = GraphDatabase.driver("connection", auth=("neo4j", "password"))

def add_data(tx):
    tx.run("LOAD CSV WITH HEADERS FROM 'file:///C:/Users/Damian/PycharmProjects/NeoJ/DataMap.csv' AS Map \
            MERGE (source {node_name: Map.source}) \
            MERGE (destination {node_name: Map.destination}) \
            CREATE (source)-[:FEEDS_INTO]->(destination)")

def add_other(tx):
    tx.run("LOAD CSV WITH HEADERS FROM 'file:///C:/Users/Damian/PycharmProjects/NeoJ/Data.csv' AS Data \
            MATCH (node {node_name: Data.dane}) \
            MERGE (system {system_name: Data.system}) \
            MERGE (scope {scope_name: Data.scope}) \
            MERGE (process {process_name: Data.process}) \
            MERGE (owner {owner_name: Data.owner}) \
            MERGE (node)-[:UNDER_SYSTEM]->(system) \
            MERGE (system)-[:UNDER_SCOPE]->(scope) \
            MERGE (node)-[:HAS_PROCESS]->(process) \
            MERGE (owner)-[:IS_OWNER_OF]->(node)")

def add_data_properties(tx):
    tx.run("LOAD CSV WITH HEADERS FROM 'file:///C:/Users/Damian/PycharmProjects/NeoJ/Data.csv' AS Data \
            MATCH (n {node_name: Data.dane}) \
            SET n.system = Data.system \
            SET n.scope = Data.scope \
            SET n.process = Data.process \
            SET n.owner = Data.owner")

with driver.session() as session:
    session.write_transaction(add_data)
    session.write_transaction(add_other)
    session.write_transaction(add_data_properties)

driver.close()
When I use Neo4j Browser everything is OK: I get nodes with properties and relationships. The problem is when I open the data in Neo4j Bloom. I use the option to generate a perspective, but the nodes show up with no properties, and I can only add one category, "node", which doesn't show anything.
Example from Browser:
Example from Bloom:
Cypher code I use for finding all nodes and connections:
MATCH (n)-[r]->(m)
RETURN n, r, m
OK, never mind. I solved it by adding labels to the nodes; Bloom builds perspective categories from node labels, so unlabeled nodes can't be categorized. I just changed the code to this:
def add_data(tx):
    tx.run("LOAD CSV WITH HEADERS FROM 'file:///C:/Users/Damian/PycharmProjects/NeoJ/DataMap.csv' AS Map \
            MERGE (source:Source {node_name: Map.source}) \
            MERGE (destination:Destination {node_name: Map.destination}) \
            CREATE (source)-[:FEEDS_INTO]->(destination)")

def add_other(tx):
    tx.run("LOAD CSV WITH HEADERS FROM 'file:///C:/Users/Damian/PycharmProjects/NeoJ/Data.csv' AS Data \
            MATCH (node {node_name: Data.dane}) \
            MERGE (system:System {system_name: Data.system}) \
            MERGE (scope:Scope {scope_name: Data.scope}) \
            MERGE (process:Process {process_name: Data.process}) \
            MERGE (owner:Owner {owner_name: Data.owner}) \
            MERGE (node)-[:UNDER_SYSTEM]->(system) \
            MERGE (system)-[:UNDER_SCOPE]->(scope) \
            MERGE (node)-[:HAS_PROCESS]->(process) \
            MERGE (owner)-[:IS_OWNER_OF]->(node)")
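If the data is already loaded, the labels can also be added in place instead of re-importing. A hedged Cypher sketch, keying off the properties used above (IS NOT NULL syntax assumes Neo4j 4.x or later):
MATCH (n) WHERE n.system_name IS NOT NULL SET n:System;
MATCH (n) WHERE n.scope_name IS NOT NULL SET n:Scope;
MATCH (n) WHERE n.process_name IS NOT NULL SET n:Process;
MATCH (n) WHERE n.owner_name IS NOT NULL SET n:Owner;
After relabelling (or reloading), regenerating the perspective in Bloom should pick up the categories.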

Updating all active Twilio Flow executions to ended gives an error

So I want to run this Python script to end all Flow executions that are currently active:
executions = client.studio \
    .v1 \
    .flows('FWXXXXXX') \
    .executions \
    .list(limit=20)

for record in executions:
    if record.status == 'active':
        execution = client.studio \
            .flows('FWXXXXXX') \
            .executions(record.sid) \
            .update(status='ended')
        print('ending', execution)
I get this error: 'ExecutionContext' object has no attribute 'update'.
I want to end a Twilio Flow, but the documentation on the Twilio website does not work for me in this case.
Twilio developer evangelist here.
Since you have already collected the executions when you listed them from the API, you can use each of those objects to update its status. Try this:
executions = client.studio \
    .v1 \
    .flows('FWXXXXXX') \
    .executions \
    .list(limit=20)

for record in executions:
    if record.status == 'active':
        record.update(status='ended')
        print('ending', record)
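One thing to watch: .list(limit=20) only fetches the first 20 executions. If there can be more than that, the Python library's stream() helper pages through the full list automatically; a small sketch under that assumption:
# Hedged sketch: stream() auto-pages, so every active execution gets
# ended, not just the first 20 returned by list(limit=20).
for record in client.studio \
        .v1 \
        .flows('FWXXXXXX') \
        .executions \
        .stream():
    if record.status == 'active':
        record.update(status='ended')
        print('ending', record)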

Can't access PFUsers from Parse

I am new to Parse and iOS development.
I want to use Parse for user management of an iOS app. With the help of the documentation (https://parse.com/docs/ios/guide#users-signing-up) I was able to sign up some sample users, and I have the following in the Parse app.
Then I wanted to retrieve all users (or query for specific ones), following the documentation:
var query = PFUser.query()
query.whereKey("gender", equalTo:"female")
var girls = query.findObjects()
I expected to receive an array of size 3 but surprisingly didn't receive any results.
Later I figured out I can use the API Console feature of Parse and tried using it to retrieve PFUser objects. I received zero results.
Later I tried with a sample table, and I was able to successfully add and retrieve objects in that table.
Not sure if I need to do anything special to use PFUser.
I have tested using the API Console and it's working on my side. I have one entry in the User table having gender = male.
I fired the following curl query to retrieve all the users:
curl -X GET \
-H "X-Parse-Application-Id: <#your-application-key>" \
-H "X-Parse-REST-API-Key: <#your-rest-api-key>" \
-G \
https://api.parse.com/1/users
Result from the above query is:
{"results":[{"gender":"male","createdAt":"2015-12-10T06:00:40.368Z","email":"sam07it22#gmail.com","objectId":"1KbxpYgeUb","updatedAt":"2015-12-10T06:01:32.885Z","username":"sam07it22#gmail.com"}]}
Now, I'm going to add a condition to the above query to fetch records having gender = female. As per the data, the expected result should have zero records.
For gender = female
curl -X GET \
-H "X-Parse-Application-Id: <#your-application-key>" \
-H "X-Parse-REST-API-Key: <#your-rest-api-key>" \
-G \
--data-urlencode 'where={"gender":"female"}' \
https://api.parse.com/1/users
Output:
{"results":[]}
For gender = male
curl -X GET \
-H "X-Parse-Application-Id: <#your-application-key>" \
-H "X-Parse-REST-API-Key: <#your-rest-api-key>" \
-G \
--data-urlencode 'where={"gender":"male"}' \
https://api.parse.com/1/users
Output:
{"results":[{"gender":"male","createdAt":"2015-12-10T06:00:40.368Z","email":"sam07it22#gmail.com","objectId":"1KbxpYgeUb","updatedAt":"2015-12-10T06:01:32.885Z","username":"sam07it22#gmail.com"}]}
NOTE:
Don't forget to substitute your own Application ID and REST API key.
ACL permissions did the trick. ACL permissions have to be "public read" for the users to be accessible. Here is more info about ACLs in Parse: https://parse.com/docs/ios/guide#security-object-level-access-control
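For completeness, a minimal sketch of setting that up from the iOS SDK at sign-up time; the API names below are taken from the Parse iOS guide and may differ slightly between SDK versions:
// Hedged sketch: make objects created from now on publicly readable by
// default. Existing users still need their ACLs fixed (e.g. in the data
// browser), since a default ACL only applies to newly created objects.
let defaultACL = PFACL()
defaultACL.publicReadAccess = true
PFACL.setDefaultACL(defaultACL, withAccessForCurrentUser: true)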
