What's the best way to record Neo4j queries?

I have Neo4j instances in several remote geographical locations, working with local data. I also have a master Neo4j instance which will store data from all locations. In case of network failure I want the remote instances to save queries which can be replayed later at the master instance.
What is the best way to achieve this?
I'm using the Neo4j .NET client.

Are you using Neo4j Enterprise with High Availability mode? In that case writes on the slaves are automatically pushed to the master first.
Otherwise you can log the full requests (meaning the Cypher query + parameters).
Make sure to activate HTTP logging in the neo4j-server.properties configuration file:
org.neo4j.server.http.log.enabled=true
org.neo4j.server.http.log.config=conf/neo4j-http-logging.xml
And in your neo4j-http-logging.xml you can add this pattern to your appenders:
<pattern>%fullRequest\n\n</pattern>

There are some undocumented logging options (experimental):
dbms.querylog.enabled=true
# threshold in ms; queries taking longer than this are logged
dbms.querylog.threshold=500
dbms.querylog.path=data/log/queries.log
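If the server-side logs are not enough, another option is to do the store-and-forward in your own client code: run each write against the master and, if it fails, persist the Cypher text plus its parameters locally so they can be replayed later. Below is a minimal sketch of that idea using the official Python driver just to illustrate the pattern (the question uses the .NET client, where the same approach applies); the file name, URI, credentials and queries are placeholders, not anything prescribed by Neo4j.

import json
from pathlib import Path
from neo4j import GraphDatabase

PENDING = Path("pending_queries.jsonl")  # local durable queue of unsent writes

def run_or_queue(driver, cypher, params):
    # Try to run a write on the master; on failure, queue it for later replay.
    try:
        with driver.session() as session:
            session.run(cypher, params).consume()
    except Exception:
        # Network failure (or any driver error): persist query + parameters locally.
        with PENDING.open("a") as f:
            f.write(json.dumps({"cypher": cypher, "params": params}) + "\n")

def replay_pending(driver):
    # Replay queued queries against the master once the network is back.
    if not PENDING.exists():
        return
    remaining = []
    for line in PENDING.read_text().splitlines():
        entry = json.loads(line)
        try:
            with driver.session() as session:
                session.run(entry["cypher"], entry["params"]).consume()
        except Exception:
            remaining.append(line)  # keep queries that still fail
    PENDING.write_text("\n".join(remaining) + ("\n" if remaining else ""))

# master = GraphDatabase.driver("bolt://master.example.com:7687", auth=("neo4j", "secret"))
# run_or_queue(master, "MERGE (s:Sensor {id: $id}) SET s.value = $value", {"id": 1, "value": 42})
# replay_pending(master)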

Related

No write operations are allowed directly on this database. Writes must pass through the leader. The role of this server is: FOLLOWER

I'm using py2neo (2020.1.0) to connect to and query Neo4j, and I'm getting the error below:
No write operations are allowed directly on this database. Writes must pass through the leader. The role of this server is: FOLLOWER
I use the neo4j+s: scheme to connect. From the articles I've gone through, neo4j+s: should take care of routing, but it doesn't seem to be working. Is there a way to get around this?
You need to use the bolt+routing scheme if you want to open a connection to a FOLLOWER but have writes routed to the LEADER:
./cypher-shell -a bolt+routing://the_follower:7637
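In py2neo itself the routing behaviour is chosen by the URI scheme passed to Graph, so a rough sketch of the same idea looks like the following, assuming a Neo4j 4.x cluster and that your py2neo version honours the routing scheme; the host, port and credentials are placeholders.

from py2neo import Graph

# A routing scheme (neo4j:// here, neo4j+s:// for TLS) makes the driver fetch
# the cluster routing table, so write queries are forwarded to the LEADER even
# when the address you connected to is a FOLLOWER.
graph = Graph("neo4j://cluster.example.com:7687", auth=("neo4j", "secret"))

# A write query; with routing enabled this should be sent to the leader.
graph.run("MERGE (p:Person {name: $name})", name="Alice")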

How to configure Serilog for WCF

I am upgrading our old application to use Serilog. One piece of the existing functionality is: when the log level is ERROR, it logs to a local file and also sends a WCF request to the remote server, and the remote server updates a database.
Basically it logs to multiple targets (local file, remote database via a WCF request) if the level is ERROR.
I understand how to use the rolling file sink to log to a local file.
However, I do not know how to configure a WCF service call for Serilog. Is there a 'WCF sink' that can help me achieve this?
As of this writing there's no generic sink that makes "WCF" calls... You'd have to build your own sink, implementing the calls you need.
You can see a list of documented sinks on the "Provided Sinks" page of Serilog's wiki, and you can also find available sinks on NuGet.org.

Getting Data from Kaa server without using Log Appender

I am using the Kaa client to pump data to the Kaa server.
I want to fetch this data in order to display it in a client application. With the use of log appenders, I am able to do so.
However, is it possible to do the same without adding any external DB? I read in the Kaa documentation that by default, Kaa stores data in MySQL (MariaDB / PostgreSQL).
However, when I tried to access MySQL (which is part of the Kaa Sandbox), I was unable to do so.
Can anyone tell me how we can do this?
Yes, Kaa should be configured to append the gathered telemetry to some log appender (one can also create a Custom Log Appender with specific functionality if required) or even a set of log appenders, depending on the use case.
The easiest way is to configure one of the existing Log Appenders to log the data to e.g. Cassandra and then retrieve the data from there.
Should you need real time triggering of some actions depending on the data received from client(s), you would probably need to try developing a Custom Log Appender for that.
We need to have an external log appender in order to log the data. The internal database only takes care of schemas, event class families, client/server profile info, and notification info.
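As an illustration of the "log to Cassandra, then read it back" route: once a Cassandra log appender is configured, the client application can query the resulting table directly. A rough sketch with the Python cassandra-driver; the contact point, keyspace and table names below are placeholders and depend entirely on how the appender is configured.

from cassandra.cluster import Cluster

# Connect to the Cassandra node(s) the Kaa log appender writes to.
# "cassandra.example.com", "kaa_logs" and "telemetry" are placeholder names.
cluster = Cluster(["cassandra.example.com"])
session = cluster.connect("kaa_logs")
rows = session.execute("SELECT * FROM telemetry LIMIT 100")
for row in rows:
    print(row)
cluster.shutdown()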

In REST API for neo4j, how to merge or create nodes?

I am running a local instance of neo4j ("Community" edition) on a Windows 10 laptop. My client is in Java and uses the REST API (via port 7474) to talk with the neo4j database.
QUESTION: is there some way to get the equivalent of a MERGE/CREATE directive in cypher to happen via the REST API call to /db/data/node endpoint?
I'm guessing that I could impose a unique constraint on different node types and achieve the desired behavior. But out of the box, what I am hoping for is a single endpoint -- e.g., /db/data/node -- which either creates or merges the inbound data with any existing nodes in the graph.
You don't have to figure out how to get the "equivalent" of Cypher clauses like MERGE/CREATE. You can use Cypher directly via either of these REST endpoints:
/db/data/transaction
/db/data/cypher
However, if you only want to use the /db/data/node endpoint, you can take advantage of unique indexing and use either the uniqueness=get_or_create or uniqueness=create_or_fail URL parameters.
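For example, a MERGE can be sent through the transactional endpoint with parameters and committed in a single request. Here is a rough sketch using Python's requests module just to show the payload shape (the question's client is in Java, where the same JSON body applies); the host, credentials, label and property are placeholders.

import requests

# Send a Cypher MERGE through the transactional endpoint and commit it in one round trip.
url = "http://localhost:7474/db/data/transaction/commit"
payload = {
    "statements": [
        {
            "statement": "MERGE (p:Person {name: {name}}) RETURN id(p)",
            "parameters": {"name": "Alice"},
        }
    ]
}
resp = requests.post(url, json=payload, auth=("neo4j", "secret"))
resp.raise_for_status()
print(resp.json()["results"][0]["data"])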

Elmah XML Logging on Load Balanced Environment

We're implementing Elmah for an internal application. For development and testing we use a single server instance, but in production the app runs in a load-balanced environment.
Everything works like a charm using Elmah, except that the logs are kept independently on each server. What I mean is that if an error happens on Server1, the XML file is stored physically on that server, and the same goes for Server2, since I'm storing the files in App_Data.
When I access the axd location to see the error list, I only see the errors from the server that happened to serve my request.
Is there any way to consolidate the XML files other than putting them in a shared folder? A shared folder would force us to give the user that runs the application on each server access to that folder, which would live on only one of the servers instead of both.
I cannot use in-memory or database logging since the XML file log is the only option allowed.
You might consider using ElmahR for this case, since you are not able to implement in-memory or database logging. ElmahR provides a central location that the two load-balanced servers can send errors to (in addition to logging them locally) via an HTTP post. You can then access the ElmahR site to view an aggregated list. Also, ElmahR stores the error messages it receives in a SQL Server CE database, so it can persist them.
Keep in mind that if the ElmahR Dashboard app design does not meet your initial needs, it can be modified as needed, given that it is an open-source project.
Hope this is a viable option.
