I want to send an email when my db is down. I don't know how to check whether Neo4j is running from PHP. I am using the neoxygen neoclient library to connect to Neo4j. Is there any way to do this? I am using Neo4j 2.3.2.
Since Neo4j is accessed over its HTTP REST interface, you just need to check whether the endpoint is reachable:
if (@fopen("http://localhost:7474/db/data/", "r")) {
    // database is up
}
(assuming it's running on localhost; note that since Neo4j 2.2 the REST endpoint requires authentication by default, so this check needs auth disabled or credentials supplied with the request)
a) Upgrade to graphaware neo4j-php-client; neoxygen has been deprecated for months and was ported there more than a year ago.
b) You can simply wrap a query in a try/catch:
try {
    $result = $client->run('RETURN 1 AS x');
    if (1 === $result->firstRecord()->get('x')) {
        // db is running
    }
} catch (\Exception $e) {
    // db is not running or the connection cannot be made
}
I have a server on GCR and it pings a db when called. I was thinking of just adding a simple caching mechanism like
var lastDBUpdate int
var lastCache int

if lastDBUpdate > lastCache {
    lastCache = now
    return newResults
} else {
    return cachedResults
}
// endpoints that modify the db update the lastDBUpdate global var
This would work if there were only one container (i.e. while my backend has little load), but as my app grows and multiple containers are created, the lastDBUpdate and lastCache variables will be out of sync across the different containers. So how can I cache db reads with GCR?
You can use Memorystore.
Here is a guide on how to connect to a Redis instance from Cloud Run.
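For illustration, here is a minimal sketch of that pattern with the Jedis client (the question's snippet is Go, but the idea is the same in any runtime; the Memorystore address, the TTL and the queryDatabase helper below are placeholders):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class CachedReads {
    // placeholder address of the Memorystore (Redis) instance reached via the Serverless VPC Access connector
    private static final JedisPool POOL = new JedisPool("10.0.0.3", 6379);
    private static final int TTL_SECONDS = 60;

    static String getResults(String key) {
        try (Jedis jedis = POOL.getResource()) {
            String cached = jedis.get(key);
            if (cached != null) {
                return cached; // served from the shared cache
            }
            String fresh = queryDatabase(key); // placeholder for the actual DB read
            jedis.setex(key, TTL_SECONDS, fresh);
            return fresh;
        }
    }

    private static String queryDatabase(String key) {
        return "results-for-" + key; // placeholder
    }
}

Endpoints that modify the database can delete or overwrite the cached key in the same Redis instance, so the invalidation is visible to every container instead of living in a per-container variable.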
Stored procedures in Cosmos DB are transactional and run under snapshot isolation with optimistic concurrency control. That means write conflicts can occur, but they are detected and the transaction is rolled back.
If such a conflict occurs, does Cosmos DB automatically retry the stored procedure, or does the client receive an exception (maybe an HTTP 412 Precondition Failed?) and need to implement the retry logic itself?
I tried running 100 instances of a stored procedure in parallel that would produce a write conflict by reading a document (without setting _etag), waiting for a while, and then incrementing an integer property within that document (again without setting _etag).
In all trials so far, no errors occurred, and the result was as if the 100 runs had executed sequentially. So the preliminary answer is: yes, Cosmos DB automatically retries running an SP on write conflicts (or perhaps enforces transactional isolation by some other means such as locking), so clients hopefully don't need to worry about aborted SPs due to conflicts.
It would be great to hear from a Cosmos DB engineer how this is achieved: retry, locking or something different?
You're correct in that this isn't properly documented anywhere. Here's how an OCC check can be done in a stored procedure:
function storedProcedureWithEtag(newItem)
{
    var context = getContext();
    var collection = context.getCollection();
    var response = context.getResponse();

    if (!newItem) {
        throw 'Missing item';
    }

    // update the item to set changed time
    newItem.ChangedTime = (new Date()).toISOString();

    var etagForOcc = newItem._etag;
    var upsertAccepted = collection.upsertDocument(
        collection.getSelfLink(),
        newItem,
        { etag: etagForOcc }, // <-- pass in the etag
        function (err2, feed2, options2) {
            if (err2) throw err2;
            response.setBody(newItem);
        }
    );

    if (!upsertAccepted) {
        throw "Unable to upsert item. Id: " + newItem.id;
    }
}
Credit: https://peter.intheazuresky.com/2016/12/22/documentdb-optimistic-concurrency-in-a-stored-procedure/
The SDK does not retry on a 412. 412 failures are related to optimistic concurrency, and in those cases you are controlling the ETag that you are passing. It is expected that the user handles the 412 by reading the newest version of the document, obtaining the newer ETag, and retrying the operation with the updated value.
Example for V3 SDK
Example for V2 SDK
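Whichever SDK is used, the pattern is the same: read the latest document, take its ETag, send the update conditioned on that ETag, and on a 412 read again and retry. A minimal sketch assuming the Java v4 SDK (the Counter type, the ids and the partition key are placeholders; the unbounded retry loop is kept only for brevity):

import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.CosmosException;
import com.azure.cosmos.models.CosmosItemRequestOptions;
import com.azure.cosmos.models.CosmosItemResponse;
import com.azure.cosmos.models.PartitionKey;

public class OccRetry {
    // placeholder document type
    public static class Counter {
        public String id;
        public int count;
    }

    public static void incrementWithRetry(CosmosContainer container, String id, String pk) {
        while (true) {
            // read the newest version of the document to obtain its current ETag
            CosmosItemResponse<Counter> read = container.readItem(id, new PartitionKey(pk), Counter.class);
            Counter item = read.getItem();
            item.count++;

            CosmosItemRequestOptions options = new CosmosItemRequestOptions();
            options.setIfMatchETag(read.getETag()); // replace only if nobody changed it in between

            try {
                container.replaceItem(item, id, new PartitionKey(pk), options);
                return; // success
            } catch (CosmosException e) {
                if (e.getStatusCode() != 412) {
                    throw e; // not a concurrency conflict
                }
                // 412 Precondition Failed: another writer won, loop to re-read and retry
            }
        }
    }
}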
I'm trying to resolve MX records in a Kubernetes pod.
The dnsjava library works when tested on macOS and Ubuntu outside of a container, but returns an empty array once deployed.
What needs to be available in K8s or in the Docker image for this to work?
See https://github.com/dnsjava/dnsjava
EDIT 1
Record[] records;
try {
    records = new Lookup(mailDomain, Type.MX).run();
} catch (TextParseException e) {
    throw new IllegalStateException(e);
}
if (records != null && records.length > 0) {
    for (final Record record : records) {
        MXRecord mx = (MXRecord) record;
        // do something with mx...
    }
} else {
    log.warn("Failed to determine MX record for {}", mailDomain);
}
The log.warn is always executed in K8s. The Docker image is openjdk:11-jdk-slim, i.e. it's Debian-based. I just tested on Debian outside of Docker and it worked as well.
In the end I couldn't get dnsjava to work in Docker/K8s.
I used JNDI directly, following https://stackoverflow.com/a/16448180/400048; this works without any issues, exactly as given in that answer.
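For reference, a minimal sketch of the JNDI-based lookup along the lines of that answer (the example.org domain is a placeholder):

import java.util.Hashtable;
import javax.naming.NamingException;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class MxLookup {
    public static void main(String[] args) throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        env.put("java.naming.factory.initial", "com.sun.jndi.dns.DnsContextFactory");
        // "dns:" uses the JVM's default resolver, i.e. the cluster DNS from /etc/resolv.conf inside a pod
        env.put("java.naming.provider.url", "dns:");

        DirContext ctx = new InitialDirContext(env);
        try {
            Attributes attrs = ctx.getAttributes("example.org", new String[] { "MX" });
            Attribute mx = attrs.get("MX");
            if (mx != null) {
                for (int i = 0; i < mx.size(); i++) {
                    System.out.println(mx.get(i)); // e.g. "10 mail.example.org."
                }
            }
        } finally {
            ctx.close();
        }
    }
}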
It looks like the previously working approach is deprecated now:
unsupported.dbms.executiontime_limit.enabled=true
unsupported.dbms.executiontime_limit.time=1s
According to the documentation, new settings are responsible for timeout handling:
dbms.transaction.timeout
dbms.transaction_timeout
At the same time, the new settings appear to be related to transactions.
The new timeout settings do not seem to work. They were set in neo4j.conf as follows:
dbms.transaction_timeout=5s
dbms.transaction.timeout=5s
A slow Cypher query isn't terminated.
Then a Neo4j plugin was added to model a slow query with a transaction:
@Procedure("test.slowQuery")
public Stream<Res> slowQuery(@Name("delay") Number delay)
{
    ArrayList<Res> res = new ArrayList<>();
    // db is an @Context-injected GraphDatabaseService
    try (Transaction tx = db.beginTx()) {
        Thread.sleep(delay.longValue());
        tx.success();
    } catch (Exception e) {
        System.out.println(e);
    }
    return res.stream();
}
The procedure exposed by the plugin is executed via the neoism Golang package, and the timeout isn't triggered either.
The timeout is only honored if your procedure code either invokes operations on the graph, like reading nodes and relationships, or explicitly checks whether the current transaction is marked as terminated.
For the latter, see https://github.com/neo4j-contrib/neo4j-apoc-procedures/blob/master/src/main/java/apoc/util/Utils.java#L41-L51 as an example.
According to the documentation, the transaction guard is concerned with orphaned transactions only.
The server guards against orphaned transactions by using a timeout. If there are no requests for a given transaction within the timeout period, the server will roll it back. You can configure the timeout in the server configuration, by setting dbms.transaction_timeout to the number of seconds before timeout. The default timeout is 60 seconds.
I've not found a native way to trigger the timeout for a query that isn't orphaned.
@StefanArmbruster pointed in a good direction. The timeout-triggering behaviour can be obtained by creating a wrapper function in a Neo4j plugin, as it is done in apoc; see the sketch below.
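For illustration, a minimal sketch of such a wrapper (it would live in the same plugin class as the procedure above; the Neo4j 3.x procedure API with TerminationGuard is assumed, and the procedure name and Res result type simply mirror the question's plugin):

@Context
public TerminationGuard terminationGuard;

@Procedure("test.slowQueryGuarded")
public Stream<Res> slowQueryGuarded(@Name("delay") Number delay)
{
    long deadline = System.currentTimeMillis() + delay.longValue();
    while (System.currentTimeMillis() < deadline) {
        // throws as soon as the surrounding transaction has been marked as
        // terminated, e.g. by dbms.transaction.timeout
        terminationGuard.check();
        try {
            Thread.sleep(10); // sleep in small slices so the check runs regularly
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            break;
        }
    }
    return Stream.empty();
}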
I had been using TinkerPop and the OpenRDF Sail to connect to Neo4j locally without problems:
String dB_DIR = "neo4j//data";
Sail sail = new GraphSail(new Neo4jGraph(dB_DIR));
sail.initialize();
so that I can import a TTL or RDF file and then query it.
But now I want to connect to a remote Neo4j.
How can I use Neo4j JDBC in this case?
Or does the TinkerPop Blueprints API have a way to do it?
(I did some searching but found no good answer.)
You should use the Sesame Repository API for accessing a Sail object.
Specifically, what you need to do is wrap your Sail object in a Repository object:
Sail sail = new GraphSail(new Neo4jGraph(dB_DIR));
Repository rep = new SailRepository(sail);
rep.initialize();
After this, use the Repository object to connect to your store and perform actions, e.g. to load a Turtle file and then do a query:
RepositoryConnection conn = rep.getConnection();
try {
    // load data
    File file = new File("/path/to/file.ttl");
    conn.add(file, file.getAbsolutePath(), RDFFormat.TURTLE);

    // do query and print result to STDOUT
    String query = "SELECT * WHERE {?s ?p ?o} LIMIT 10";
    TupleQueryResult result =
        conn.prepareTupleQuery(QueryLanguage.SPARQL, query).evaluate();
    while (result.hasNext()) {
        System.out.println(result.next().toString());
    }
}
finally {
    conn.close();
}
See the Sesame documentation or Javadoc for more info and examples of how to use the Repository API.
(disclosure: I am on the Sesame development team)