Neo4j Bloom doesn't show properties or categories

So I load data with Python using this code:
from neo4j import GraphDatabase

driver = GraphDatabase.driver("connection", auth=("neo4j", "password"))

def add_data(tx):
    tx.run("LOAD CSV WITH HEADERS FROM 'file:///C:/Users/Damian/PycharmProjects/NeoJ/DataMap.csv' AS Map \
            MERGE (source {node_name: Map.source}) \
            MERGE (destination {node_name: Map.destination}) \
            CREATE (source)-[:FEEDS_INTO]->(destination)")

def add_other(tx):
    tx.run("LOAD CSV WITH HEADERS FROM 'file:///C:/Users/Damian/PycharmProjects/NeoJ/Data.csv' AS Data \
            MATCH (node {node_name: Data.dane}) \
            MERGE (system {system_name: Data.system}) \
            MERGE (scope {scope_name: Data.scope}) \
            MERGE (process {process_name: Data.process}) \
            MERGE (owner {owner_name: Data.owner}) \
            MERGE (node)-[:UNDER_SYSTEM]->(system) \
            MERGE (system)-[:UNDER_SCOPE]->(scope) \
            MERGE (node)-[:HAS_PROCESS]->(process) \
            MERGE (owner)-[:IS_OWNER_OF]->(node)")

def add_data_properties(tx):
    tx.run("LOAD CSV WITH HEADERS FROM 'file:///C:/Users/Damian/PycharmProjects/NeoJ/Data.csv' AS Data \
            MATCH (n {node_name: Data.dane}) \
            SET n.system = Data.system \
            SET n.scope = Data.scope \
            SET n.process = Data.process \
            SET n.owner = Data.owner")

with driver.session() as session:
    session.write_transaction(add_data)
    session.write_transaction(add_other)
    session.write_transaction(add_data_properties)
driver.close()
When I use the Neo4j Browser everything is OK: I get nodes with properties and relationships. The problem is when I open the graph in Neo4j Bloom. I use the option to generate a perspective, but the nodes don't show any properties, and I can only add a single category, "node", which doesn't show anything.
Example from Browser:
Example from Bloom:
The Cypher query I use to find all nodes and relationships:
MATCH (n)-[r]->(m)
RETURN n, r, m

OK, never mind. I solved it by changing the code to add labels to the nodes:
def add_data(tx):
    tx.run("LOAD CSV WITH HEADERS FROM 'file:///C:/Users/Damian/PycharmProjects/NeoJ/DataMap.csv' AS Map \
            MERGE (source:Source {node_name: Map.source}) \
            MERGE (destination:Destination {node_name: Map.destination}) \
            CREATE (source)-[:FEEDS_INTO]->(destination)")

def add_other(tx):
    tx.run("LOAD CSV WITH HEADERS FROM 'file:///C:/Users/Damian/PycharmProjects/NeoJ/Data.csv' AS Data \
            MATCH (node {node_name: Data.dane}) \
            MERGE (system:System {system_name: Data.system}) \
            MERGE (scope:Scope {scope_name: Data.scope}) \
            MERGE (process:Process {process_name: Data.process}) \
            MERGE (owner:Owner {owner_name: Data.owner}) \
            MERGE (node)-[:UNDER_SYSTEM]->(system) \
            MERGE (system)-[:UNDER_SCOPE]->(scope) \
            MERGE (node)-[:HAS_PROCESS]->(process) \
            MERGE (owner)-[:IS_OWNER_OF]->(node)")
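For context, Bloom builds its perspective categories from node labels, so nodes created without labels all end up in the single generic "node" category with nothing to display. If unlabeled data is already loaded, a follow-up query along these lines can add labels without re-importing (the label name and the property used as a filter here are only illustrative assumptions):
// Hypothetical clean-up: give existing unlabeled nodes a label so Bloom can categorize them
MATCH (n) WHERE n.system_name IS NOT NULL
SET n:System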

Related

Eventarc triggers for cross-project

I have created a Cloud Run service, but my Eventarc trigger does not fire for the cross-project job, so the data is never read. How should I set the event filter on the resource name in Eventarc so that an insert job / job completed event on the BigQuery table triggers the service?
gcloud eventarc triggers create ${SERVICE}-test1 \
  --location=${REGION} --service-account ${SVC_ACCOUNT} \
  --destination-run-service ${SERVICE} \
  --destination-run-region=${REGION} \
  --event-filters type=google.cloud.audit.log.v1.written \
  --event-filters methodName=google.cloud.bigquery.v2.JobService.InsertJob \
  --event-filters serviceName=bigquery.googleapis.com \
  --event-filters-path-pattern resourceName="/projects/destinationproject/locations/us-central1/jobs/*"
I have tried multiple options giving the resource name like:
"projects/projectname/datasets/outputdataset/tables/outputtable"

How to use a custom dataset for T5X?

I've created a custom seqio task and added it to the TaskRegistry following the instructions in the documentation. When I set the gin parameters to account for the new task, I receive an error that says my task does not exist.
No Task or Mixture found with name [my task name]. Available:
Am I using the correct Mixture/Task module that needs to be imported? Right now I am passing:
--gin.MIXTURE_OR_TASK_MODULE=\"t5.data.tasks\"
If that is not correct, what is the right statement that would allow me to use my custom task? Here is the full eval script I am using:
python3 t5x/eval.py \
--gin_file=t5x/examples/t5/t5_1_0/11B.gin \
--gin_file=t5x/configs/runs/eval.gin \
--gin.MIXTURE_OR_TASK_NAME=\"task_name\" \
--gin.MIXTURE_OR_TASK_MODULE=\"t5.data.tasks\" \
--gin.partitioning.PjitPartitioner.num_partitions=8 \
--gin.utils.DatasetConfig.split=\"test\" \
--gin.DROPOUT_RATE=0.0 \
--gin.CHECKPOINT_PATH=\"${CHECKPOINT_PATH}\" \
--gin.EVAL_OUTPUT_DIR=\"${EVAL_OUTPUT_DIR}\"
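For what it's worth, that error typically means the module named in MIXTURE_OR_TASK_MODULE never registers the task. A minimal sketch of such a registration module, assuming a TSV dataset (the module name my_tasks, the file path, and the preprocessing choices are illustrative assumptions, not part of the original post):

# my_tasks.py -- hypothetical module that registers the custom task
import functools
import seqio
import t5.data

vocabulary = t5.data.get_default_vocabulary()

# Register the task under the name passed via MIXTURE_OR_TASK_NAME.
seqio.TaskRegistry.add(
    "task_name",
    source=seqio.TextLineDataSource(
        split_to_filepattern={"test": "/path/to/test.tsv"}),
    preprocessors=[
        functools.partial(t5.data.preprocessors.parse_tsv,
                          field_names=["inputs", "targets"]),
        seqio.preprocessors.tokenize,
        seqio.preprocessors.append_eos_after_trim,
    ],
    output_features={
        "inputs": seqio.Feature(vocabulary=vocabulary),
        "targets": seqio.Feature(vocabulary=vocabulary),
    },
)

With a module like this on the Python path, --gin.MIXTURE_OR_TASK_MODULE would point at \"my_tasks\" (the module that actually performs the registration) rather than at \"t5.data.tasks\".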

Updating all active Twilio Flow executions to ended results in an error

So I want to run this Python script to end all Flow executions that are currently active:
executions = client.studio \
    .v1 \
    .flows('FWXXXXXX') \
    .executions \
    .list(limit=20)

for record in executions:
    if record.status == 'active':
        execution = client.studio \
            .flows('FWXXXXXX') \
            .executions(record.sid) \
            .update(status='ended')
        print('ending', execution)
I get this error: "'ExecutionContext' object has no attribute 'update'".
I want to end a Twilio Flow, but the documentation on the Twilio website does not work for me in this case.
Twilio developer evangelist here.
Since you have already collected the executions when you list them from the API, you should be able to use that object to update the status. Try this:
executions = client.studio \
    .v1 \
    .flows('FWXXXXXX') \
    .executions \
    .list(limit=20)

for record in executions:
    if record.status == 'active':
        record.update(status='ended')
        print('ending', record)

Neo4j streams, fabric integration not working. Log reports "The `USE GRAPH` clause is not available in this implementation of Cypher"

Integrating my streams topics with the fabric functionality is not working.
Attempting to sink my first topic into a named graph produced the message below.
I did follow the instructions provided by the links, to no avail.
Am I missing something?
The Neo4j log Error:
ErrorData(originalTopic=twoPoly, timestamp=1620757269838, partition=0, offset=1481, exception=org.neo4j.graphdb.QueryExecutionException: The USE GRAPH clause is not available in this implementation of Cypher due to lack of support for USE graph selector. (line 1, column 29 (offset: 28))
"UNWIND $events AS event use integerpolys MERGE (i:IndexedBy {N:event.NN,RowCounter:event.flatFileRowCounterr,MaxN:event.nMaxx,Dimension:"2"} ) MERGE (t:TwoSeqFactor {twoSeq:event.tSeqDB} ) MERGE (v:VertexNode {Vertex:event.vertexDBVertex,Scalar:event.vertexScalarDB,Degree:event.vertexDegreeDB} ) MERGE (e:Evaluate {Value:event.targetEvaluate}) MERGE (i)-[ee:TwoFactor]->(t) MERGE (i) -[:IndexedByEvaluate]->(e) MERGE (i)-[:VertexIndexedBy]->(v)"
^, key=null, value={"NN":"7","nMaxx":"8","vertexDBVertex":"1 -8 1 0 0","bTermDB":"1","flatFileRowCounterr":"6","targetEvaluate":"128","vertexDB":"1 -8 1 0 0","vertexScalarDB":"-8","tSeqDB":"32","vertexDegreeDB":"1"}, executingClass=class streams.kafka.KafkaAutoCommitEventConsumer)
Neo4j version 4.1.0
Relevant neo4j.conf:
fabric.database.name=differences
streams.source.enabled=false
kafka.max.poll.records=1000
kafka.zookeeper.connect=localhost:2181
kafka.bootstrap.servers=localhost:9092
streams.procedures.enabled=<true/false, default=true>
streams.sink.enabled=true
streams.sink.topic.cypher.twoPoly=use integerpolys \
MERGE (i:IndexedBy {N:event.NN,RowCounter:event.RowCounterr,MaxN:event.nMaxx,Dimension:"2"} ) \
MERGE (t:TwoSeqFactor {twoSeq:event.tSeqDB} ) \
MERGE (v:VertexNode {Vertex:event.vertexDBVertex,Scalar:event.vertexScalarDB,Degree:event.vertexDegreeDB} ) \
MERGE (e:Evaluate {Value:event.targetEvaluate}) \
MERGE (i)-[ee:TwoFactor]->(t) \
MERGE (i) -[:IndexedByEvaluate]->(e) \
MERGE (i)-[:VertexIndexedBy]->(v)
Databases available for the current user:
:use createbymu
:use differencegraph
:use fabric
:use integerpolys
:use neo4j
:use skipmu
:use system
Reference solution:
https://github.com/neo4j/neo4j/issues/12395
https://neo4j.com/docs/operations-manual/current/fabric/configuration/
The USE clause is currently not supported in this setting; it is only available when connected through a Neo4j driver.
Remove the use integerpolys from the query and instead configure the streams plugin with the target database directly, according to https://neo4j.com/labs/kafka/4.0/consumer/#_multi_database_support
streams.sink.enabled.to.integerpolys=true
streams.sink.topic.cypher.twoPoly.to.integerpolys=\
MERGE (i:IndexedBy {N:event.NN,RowCounter:event.RowCounterr,MaxN:event.nMaxx,Dimension:"2"} ) \
MERGE (t:TwoSeqFactor {twoSeq:event.tSeqDB} ) \
MERGE (v:VertexNode {Vertex:event.vertexDBVertex,Scalar:event.vertexScalarDB,Degree:event.vertexDegreeDB} ) \
MERGE (e:Evaluate {Value:event.targetEvaluate}) \
MERGE (i)-[ee:TwoFactor]->(t) \
MERGE (i) -[:IndexedByEvaluate]->(e) \
MERGE (i)-[:VertexIndexedBy]->(v)

Saving Wispr-Location-Id and Wispr-Location-Name when accounting

I have several Hotspots around the city, each one with a different Wispr-Location-Id and Wispr-Location-Name. All these Hotspots use the same RADIUS server and share the same database.
Is there any way, when an accounting message is received, to save these two parameters (Wispr-Location-Id and Wispr-Location-Name)?
I need to know which clients roam from one Hotspot to another.
Thanks!
You'll need to edit the queries specific to your database dialect:
They can be found at /etc/(raddb|freeradius)/sql/<dialect>/dialup.conf
https://github.com/FreeRADIUS/freeradius-server/blob/v2.x.x/raddb/sql/mysql/dialup.conf#L163
You'll need to add the additional fields and values to the following queries:
- accounting_start_query
- accounting_stop_query
- accounting_stop_query_alt
An example of a modified accounting_start_query for MySQL would be:
accounting_start_query = " \
INSERT INTO ${acct_table1} \
(acctsessionid, acctuniqueid, username, \
realm, nasipaddress, nasportid, \
nasporttype, acctstarttime, acctstoptime, \
acctsessiontime, acctauthentic, connectinfo_start, \
connectinfo_stop, acctinputoctets, acctoutputoctets, \
calledstationid, callingstationid, acctterminatecause, \
servicetype, framedprotocol, framedipaddress, \
acctstartdelay, acctstopdelay, xascendsessionsvrkey, \
wisprlocationid, wisprlocationname) \
VALUES \
('%{Acct-Session-Id}', '%{Acct-Unique-Session-Id}', \
'%{SQL-User-Name}', \
'%{Realm}', '%{NAS-IP-Address}', '%{NAS-Port}', \
'%{NAS-Port-Type}', '%S', NULL, \
'0', '%{Acct-Authentic}', '%{Connect-Info}', \
'', '0', '0', \
'%{Called-Station-Id}', '%{Calling-Station-Id}', '', \
'%{Service-Type}', '%{Framed-Protocol}', '%{Framed-IP-Address}', \
'%{%{Acct-Delay-Time}:-0}', '0', '%{X-Ascend-Session-Svr-Key}',\
'%{WISPR-Location-ID}', '%{WISPR-Location-Name}')"
You'll also need to add additional string type columns in the radacct table to hold the additional values.
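For example, a minimal sketch of that schema change for MySQL (the column names and sizes are assumptions; they just need to match whatever the modified queries insert):

-- Hypothetical schema change: columns to hold the WISPr location attributes
ALTER TABLE radacct
  ADD COLUMN wisprlocationid varchar(253) DEFAULT NULL,
  ADD COLUMN wisprlocationname varchar(253) DEFAULT NULL;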
