I have an application that writes to my neo4j database. Immediately after this write, another application performs a query and expects the previously written item as the result.
This doesn't happen: I don't get any result from my query.
Introducing a 100ms artificial delay between the write and the query yields the expected result, but that's not feasible.
I'm writing in TypeScript using neo4j-driver. I'm awaiting every promise the API throws at me. I even promisified the session.close function and await that too (not sure that does anything).
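For reference, the write side looks roughly like this (simplified; the connection details and Cypher are placeholders):

import neo4j from "neo4j-driver";

const driver = neo4j.driver(
    "bolt://localhost:7687",
    neo4j.auth.basic("neo4j", "password") // placeholder credentials
);

async function writeItem(name: string): Promise<void> {
    const session = driver.session();
    try {
        // Await the write so it is acknowledged before this function returns
        await session.run("CREATE (i:Item {name: $name})", { name });
    } finally {
        // In recent driver versions session.close() already returns a Promise
        await session.close();
    }
}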
Is there a cache on neo4j's side that could be at fault? Can I somehow flush it?
I have been trying to use the Durable Functions HTTP API Get Instances call to get a list of Completed/Failed/Terminated instances to delete over a given time period, batched in groups of 50: /runtime/webhooks/durabletask/instances?code=xxx&createdTimeFrom=2021-11-06T00:00:00.0Z&createdTimeTo=2021-11-07T00:00:00.0Z&top=50
As per the documentation, if the response contains the x-ms-continuation-token header then there are more results, and I should make another call adding the x-ms-continuation-token to the request headers, even if I get no results in the body (the first few calls always seem to return no results, then I start getting results for a while before dropping back to no results). My issue is that this never seems to end: there is always a continuation token, even after running for 20+ minutes and hundreds of calls for the same date range. This doesn't happen with the Durable Functions Monitor extension for VS Code.
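For reference, my paging loop looks roughly like this (simplified; the host and function key are placeholders):

async function listInstances(from: string, to: string): Promise<unknown[]> {
    const url =
        "https://myapp.azurewebsites.net/runtime/webhooks/durabletask/instances" +
        `?code=xxx&createdTimeFrom=${from}&createdTimeTo=${to}&top=50`;
    const instances: unknown[] = [];
    let token: string | null = null;

    do {
        const headers: Record<string, string> = {};
        if (token) headers["x-ms-continuation-token"] = token;
        const res = await fetch(url, { headers });
        instances.push(...(await res.json()));
        // Per the docs, keep paging while the header is present --
        // but in my runs it never goes away
        token = res.headers.get("x-ms-continuation-token");
    } while (token);

    return instances;
}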
What am I missing from the documentation that will tell me when to stop looking for more records if the x-ms-continuation-token header is always present?
I am sure that I'm overlooking some simple thing here.
With a FireDAC connection, if I use a SQL query with a WHERE clause that, due to the content of the search parameter, would normally return an empty result set, the Open command instead returns the error "Cannot open / define command, which does not return result sets". Is this by design? Every other Delphi DB connection tool I have used simply returns an empty result set with a record count of 0.
Update April 16
I believe Victoria is on the right track. I had never used FireDAC before, so I assumed this was the behaviour as designed. However, if I communicate with the same RDBMS using the MS SQL driver, I do not see this happen, so I suspect it is on the DataSnap end.
If I expose a VIEW
CREATE VIEW myView AS
SELECT ...
FROM ...
via xsodata
service namespace "oData" {
    entity "mySchema"."myView" as "myView";
}
and GET /myView for the first time after VIEW creation, the performance is very low.
However, after performing the same request again (and every time after that), the performance is what I want it to be.
Questions:
Why?
How to avoid the first long-running request?
Already tried:
Executing the SQL profiler output (without statement preparation) in HANA Studio's SQL console always gives good performance
Table hot loading (LOAD myTable ALL;) had no effect
Update
We found out the "Why" part: the xs-engine runs the query as a prepared statement even if there are no parameters in the request. On first execution (within the user's context) the query gets prepared, resulting in an entry in M_SQL_PLAN_CACHE (SELECT * FROM M_SQL_PLAN_CACHE WHERE USER_NAME = 'myUser'). Clearing the plan cache (ALTER SYSTEM CLEAR SQL PLAN CACHE) makes the oData request slow again, leading to the assumption that the performance gap lies in the re-preparation of the query.
We are now stuck with the 2nd question: how to avoid that? Our approach of marking certain plan cache entries for recompilation (ALTER SYSTEM RECOMPILE SQL PLAN CACHE ENTRY 123) just invalidated the entry and did not update it automatically...
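A minimal sketch of the only workaround we can think of so far: a throwaway warm-up request right after deployment, so the preparation cost is paid before real users arrive (host, path, and credentials are placeholders; note it only helps if the prepared plan is reused for later callers, which the per-user cache entry above makes questionable):

// Hypothetical warm-up after deployment: pay the statement-preparation
// cost once so M_SQL_PLAN_CACHE is populated before real traffic hits.
// Host, service path, and credentials are placeholders.
async function warmUp(): Promise<void> {
    const res = await fetch(
        "https://myhost:4300/path/to/service.xsodata/myView?$top=1",
        { headers: { Authorization: "Basic " + btoa("myUser:myPassword") } }
    );
    // The body is irrelevant; the request exists only to trigger preparation.
    await res.text();
}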
I'm not too sure you can remove the long first-execution time, but you can try changing the view to a Calculation View executed in the SQL Engine.
HANA has been heavily optimized for its Calculation Views, and the Plan Cache should run faster with them, maybe reducing the first execution time significantly. Also, the Plan Cache of Calc. Views should be shared between users (since _SYS_REPO is the one who generates them).
If you use the scripted version I believe you could reuse a lot of your current SQL, but you could also try the graphical approach.
Let us know if you had any luck. Modeling with Big Data is always a surprise.
Our Delphi 7 application communicates with the OpenOffice Calc DDE service, sOffice, using DDEML. It uses the service to read from a spreadsheet.
We've run into a curious issue. After a large number of calls to 'DdeClientTransaction', the function returns a value of zero, indicating that it has failed. This failure is accompanied by the error 'DMLERR_NOTPROCESSED', which, according to http://www.opcdatahub.com/Docs/dhw-ax-windowsddeerrornumbers.html, means 'Receiving task was not interested in message'.
This is what we would expect to see if the DDE command was invalid. That is definitely not the case here. It happens after 16375 calls to 'DdeClientTransaction'. We can replicate this every time, over different spreadsheets.
To further confuse things, if we call DDEConnect after this failure, it returns a negative value. As far as we can tell, this is undocumented behavior. The function should return a positive handle or zero to indicate failure.
What's going on with the DDE connection and how do we fix it?
It would appear that the saveChanges method in Breeze waits indefinitely when calling to or waiting for the server. Is there a way of getting it to time out? I am calling saveChanges with allowConcurrentSaves: false. This causes users who somehow never get a response from the server (say, with a dropped internet connection) to simply hang in limbo indefinitely.
I do not want to re-call the method with allowConcurrentSaves set to false, fearing that I might duplicate the data.
Any ideas?
Thanks
Update 16 May 2014
You can set an HTTP-level timeout and cancellation with the AJAX adapter's requestInterceptor as of v.1.4.12. See the documentation, "Controlling AJAX calls".
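Roughly along these lines (a sketch, assuming the jQuery ajax adapter, where requestInfo.config is handed to $.ajax so its standard timeout option applies):

// Sketch, assuming the jQuery ajax adapter: requestInfo.config is passed
// through to $.ajax, so the standard jQuery timeout option (in ms) applies.
const ajaxAdapter = breeze.config.getAdapterInstance("ajax");
ajaxAdapter.requestInterceptor = function (requestInfo) {
    requestInfo.config.timeout = 5000; // abort requests after 5 seconds
};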
I'd still be reluctant to use this feature on save, as you have no way of knowing whether the server persisted the data or not. Of course, if your client hangs or crashes you don't know anyway. It's up to you.
Original Answer
Actually, there is a ready-made solution in Q.js. It's called timeout; it's mentioned in the API reference, and a simplified example of its implementation and use is in the readme.md.
I know you asked about save, but your question is pertinent to promises in general. Here is a query example adapted from queryTests.js in our DocCode sample:
var timeoutMs = 10000; // 10 second timeout
var em = newEm(); // creates a new EntityManager
var query = new EntityQuery().from("Customers").using(em);

Q.timeout(query.execute(), timeoutMs)
    .then(queryFinishedBeforeTimeout)
    .fail(queryFailedOrTimedout);

function queryFailedOrTimedout(error) {
    var expect = /timed out/i;
    var emsg = error.message;
    if (expect.test(emsg)) {
        log("Query timed out w/ message '{0}'".format(emsg));
        // do something
    } else {
        handleFail(error);
    }
}
Note: I just added this test, so you'd have to get it from GitHub or wait for a Breeze release after 1.2.5.
Oops ... maybe not
I gave what I think is a great answer for query. It may not be the right answer for save.
The problem with save is that you do not know on the client if the save succeeded until the server responds. Things could go wrong anywhere along the way. The server might not have heard the request to save. The server may have failed during save. The server may have saved the data but the response never made it back to the client.
Changing the value of allowConcurrentSaves won't get you out of this bind. Neither will having a save timeout.
In fact, adding a timeout to the save is probably deceiving. It is even possible for the save response to arrive after your custom timeout ... in which case Breeze will have tried to update your EntityManager ... and you won't know if Breeze succeeded or failed!
What if we added a Breeze save timeout? What should it do? Suppose Breeze said the save had timed out ... and ignored a belated response from the server. Then imagine that the save succeeded on the server - it just took "too long" to respond to the client. Now you've got a client whose state is unexpectedly out of sync with the server. This is not good.
So I think you want a different solution to this very real problem. It's a user experience problem, really. You can indicate to the user that you think the save is still in progress and then set your own timer. If the save isn't done when your timer expires, you can query the server to see if the data have been saved, or if there is a connection ... or something along these lines. I can't think of a better way right now, honestly.
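A rough sketch of that idea (everything here is hypothetical scaffolding, not Breeze API; the status endpoint and request id are things you'd have to build yourself):

// Hypothetical: fire the save, and if it hasn't settled after waitMs,
// ask the server -- via a status endpoint you write yourself -- whether
// the data actually arrived.
function saveWithStatusCheck(em: any, requestId: string, waitMs: number) {
    let settled = false;
    const savePromise = em.saveChanges()
        .then((result: any) => { settled = true; return result; })
        .fail((error: any) => { settled = true; throw error; });

    setTimeout(() => {
        if (!settled) {
            // "api/saveStatus" is a made-up endpoint; the save payload
            // would need to carry requestId so the server can answer.
            fetch("api/saveStatus?requestId=" + requestId)
                .then(res => res.json())
                .then(status => { /* reconcile client state with the answer */ });
        }
    }, waitMs);

    return savePromise;
}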
Note that I'm assuming you need to know that the server succeeded. If you avoid store-generated IDs and always assume saves succeed unless the server tells you otherwise ... well that's a completely different paradigm and programming model that we could talk about someday (see meteorjs).
The net of all of this: I'm pretty darned sure that a save timeout is NOT what you want.
Still useful on a query though :)
Great question, and I wish I had a good answer. But it is definitely worth looking into. Could you please add this as a feature request on the Breeze User Voice? We take these requests very seriously in determining our priorities for Breeze development.