Neo4j Server: How to set connection timeouts

How do I set - in my case, raise - the connection timeouts of the Neo4j server? I have a server extension to which I POST data, sometimes so much that the extension runs for a couple of minutes. But after 200 seconds, the connection is dropped by the server. I think I have to raise the max idle time of the embedded Jetty, but I don't know how to do that since it's all configured within the Neo4j Server code.
Edit: I've tried both Neo4j 1.8.2 and 1.9.RC2 with the same result.
Edit 2: Added the "embedded-jetty" tag because there have been no answers so far; perhaps the question can be answered by someone with embedded-Jetty knowledge, since Neo4j uses an embedded Jetty.
Thank you!

I still don't know if there is a solution for Neo4j server versions < 2.0. However, after switching to 2.0.0 and above, the issue was gone in my case.

The server guards against orphaned transactions by using a timeout. If there are no requests for a given transaction within the timeout period, the server will roll it back. You can configure the timeout period by setting the following property to the number of seconds before timeout. The default timeout is 60 seconds.
org.neo4j.server.transaction.timeout=60
See http://docs.neo4j.org/chunked/stable/server-configuration.html
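A related knob: with the 2.x transactional HTTP endpoint, a client can keep a long-running transaction alive by POSTing an empty statement list to the transaction's URI before the timeout expires, which resets the countdown. A sketch (the transaction ID 42 is made up):

```
POST http://localhost:7474/db/data/transaction/42

{ "statements": [] }
```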

Related

How to configure a query timeout in Neo4j 3.0.1

I'd like to set a query timeout in neo4j.conf for Neo4j 3.0.1. Any query taking longer than the timeout should get killed. I'm primarily concerned with setting the timeout for queries originating from the Neo4j Browser.
It looks like this was possible in the past with:
execution_guard_enabled=true
org.neo4j.server.webserver.limit.executiontime=20000
However, this old method doesn't work for me. I see Neo4j 3.0 has a dbms.transaction_timeout option defined as a "timeout for idle transactions". However, this setting also doesn't seem to do the trick.
Thanks to #stdob for the comment explaining a solution.
In Neo4j 3.0.1 Community, I verified that the following addition to neo4j.conf enabled a query timeout of 1 second for Browser queries:
unsupported.dbms.executiontime_limit.enabled=true
unsupported.dbms.executiontime_limit.time=1s
I did not check whether the timeout applies to queries outside of Neo4j Browser, but I assume so. I did find some documentation in the Neo4j codebase for unsupported.dbms.executiontime_limit.time:
If execution time limiting is enabled in the database, this configures the maximum request execution time.
I believe dbms.transaction.timeout is the current way of limiting execution time
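Putting the pieces of this thread together, a minimal neo4j.conf sketch (the 20-second values are illustrative, and the unsupported.* settings are internal and may change between releases):

```
# Kill any query running longer than 20 s (internal/unsupported setting)
unsupported.dbms.executiontime_limit.enabled=true
unsupported.dbms.executiontime_limit.time=20s

# In later 3.x releases, the supported setting for transaction execution time
dbms.transaction.timeout=20s
```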

Configuring time out in IIS 8.5

I've got a site which has an export feature. This feature can export parts of the database, up to a full database export. It has been optimized a lot but still needs some 90-180 seconds to finish. When debugging, timeouts aren't an issue, but live I receive a 504 Gateway Timeout error after about 90 seconds. I am guessing that IIS gets tired of waiting for the backend to respond and returns a 504. Is there any way to specify a longer timeout, e.g. 5 minutes?
I've got an old executionTimeout setting set to 3600, which doesn't seem to do much any more (I believe it's an IIS < 7 setting).
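For reference, that legacy setting belongs under system.web in web.config (a sketch; note executionTimeout is only honored when compilation debug is set to false):

```xml
<configuration>
  <system.web>
    <!-- seconds; ignored when <compilation debug="true"> -->
    <httpRuntime executionTimeout="3600" />
  </system.web>
</configuration>
```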
I've also tried this suggestion from another Stack Overflow question:
<configuration>
  <system.applicationHost>
    <webLimits connectionTimeout="00:01:00"
               dynamicIdleThreshold="150"
               headerWaitTimeout="00:00:30"
               minBytesPerSecond="500" />
  </system.applicationHost>
</configuration>
The above doesn't work; it breaks the config file. Is it supposed to work?
Main question: how/can I increase waiting time in IIS to avoid 504s?
This wasn't an IIS issue. I believe the old way of providing a timeout worked. We've got a reverse proxy with a short timeout; changing it to a longer timeout effectively resolved the issue.
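The answer doesn't say which reverse proxy was involved; as an illustration, if it were nginx, the directive that produces exactly this kind of upstream 504 is proxy_read_timeout, e.g.:

```nginx
location / {
    proxy_pass http://backend;   # hypothetical upstream name
    proxy_read_timeout 300s;     # allow slow export requests up to 5 minutes
}
```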

google cloud sql redmine mysql not responding

I've been trying to set up a Redmine on google compute engine with the mysql 5.5 database hosted on google cloud sql (d1, 512mb of ram, always-on, europe, package-billed).
Unfortunately, Redmine stops responding to requests after a few minutes (really stops - I set the timeout to 1 hour and nothing happens). Using New Relic I found out that it's database-related: ActiveRecord seems to have some problems with the database.
In order to find out if the problems are really related to the Cloud SQL database, I set up a new database on my own server, and it has been working fine ever since. So there definitely is an issue between the Cloud SQL database and Redmine/Ruby.
Does anyone have an idea what I can try to solve the problem?
Best,
Jan
Idle connections from GCE are closed automatically after 10 minutes, as explained in [1]. As you are connecting to Cloud SQL from a GCE instance, this is most likely the cause of your issue.
Additionally, take into account that Cloud SQL instances can go down and come back at any time due to maintenance, so connections must be managed accordingly. Checking the Cloud SQL instance operation list would confirm this. Hope this helps.
[1] https://cloud.google.com/sql/docs/gce-access
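One common mitigation, given that idle cutoff, is to enable TCP keepalives on the client side so the connection never looks idle. A minimal Python sketch of the mechanism (for Redmine itself this would typically be configured at the OS or database-adapter level; the 60-second values are illustrative, and the TCP_KEEP* constants are platform-specific):

```python
import socket

def enable_keepalive(sock):
    """Ask the OS to probe an otherwise idle TCP connection periodically."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-specific tuning: start probing after 60 s idle, repeat every 60 s.
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
    if hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 60)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_keepalive(sock)
```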

HBase 0.98.1 Put operations never timeout

I am using the 0.98.1 version of the HBase server and client. My application has strict response-time requirements. As far as HBase is concerned, I would like to abort the HBase operation if the execution exceeds 1 or 2 seconds. This task timeout is useful in case the Region Server is non-responsive or has crashed.
I tried configuring
1) HBASE_RPC_TIMEOUT_KEY = "hbase.rpc.timeout";
2) HBASE_CLIENT_RETRIES_NUMBER = "hbase.client.retries.number";
However, the Put operations never time out (I am using sync flush). The operations return only after the Put is successful.
I looked through the code and found that the function receiveGlobalFailure in the AsyncProcess class keeps resubmitting the task without any check on the retries. This is in version 0.98.1.
I do see that in 0.99.1 there have been some changes to the AsyncProcess class that might do what I want. I have not verified it, though.
My questions are:
1) Is there any other configuration I missed that can give me the desired functionality?
2) Do I have to use the 0.99.1 client to solve my problem? Does 0.99.1 solve my problem?
3) If I have to use the 0.99.1 client, do I also have to use a 0.99.1 server, or can I keep my existing 0.98.1 region servers?
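For reference, the two settings named in the question go into the client-side hbase-site.xml like this (the values are illustrative; as the question observes, in 0.98.1 they may still not bound Put latency):

```xml
<configuration>
  <!-- Fail an individual RPC after 2 seconds -->
  <property>
    <name>hbase.rpc.timeout</name>
    <value>2000</value>
  </property>
  <!-- Do not keep resubmitting failed operations -->
  <property>
    <name>hbase.client.retries.number</name>
    <value>1</value>
  </property>
</configuration>
```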

Neo4j server seems to drop connection when processing for 200 seconds

I've been writing a Neo4j server extension (as described here, i.e. a managed server extension: http://docs.neo4j.org/chunked/stable/server-plugins.html). It just receives a string via POST (which in production would hold the information the extension should process, but that is of no further concern here). I tried the extension with Neo4j 1.8.2 and 1.9.RC2; the outcome was the same.
Now my problem is that sometimes this extension does quite a lot of work which can take a couple of minutes. However, after exactly 200 seconds, the connection gets lost. I'm not absolutely sure what is happening, but it seems the server is dropping the connection.
To verify this behavior, I wrote a new, almost empty server extension which does nothing but wait 5 minutes (via Thread.sleep()). From a test client class, I POST some dummy data. I tested with Jersey, Apache HttpComponents and plain Java URL connections. Jersey and plain Java do a retry after exactly 200 seconds; HttpComponents throws org.apache.http.NoHttpResponseException: The target server failed to respond.
I think it's a server issue: first because the exception seems to indicate that in this context (there's a comment saying so in the HttpComponents code), and second because when I set the connection timeout and/or socket timeout to values lower than 200 seconds, I just get normal timeout exceptions.
Now there's one thing on top of that: I said I would POST some data. Seemingly, this whole behavior depends on the amount of data sent. I have pinned it down far enough to say that when sending a string of ca. 4500 characters, the described behavior does NOT happen; everything is alright and I get an HTTP 204 "No Content" response, which is correct.
As soon as I send ca. 6000 characters or more, the mentioned connection drop occurs. The string I'm sending here is only dummy data: a sequence of 'a' characters, i.e. "aaaaaaaa...", created with a for loop with 4500 or 6000 iterations, respectively.
In my production code I would really like to wait until the server operation has finished, but I don't know how to prevent the connection drop.
Is there an option on the Neo4j server to configure (I looked but didn't find anything) or isn't it the server's fault and my clients do something wrong? A bug somewhere?
Thanks for reading and any help or hints!
Just to wrap this up: I eventually found out that there is a default timeout constant in Jetty (version 6.x was used by Neo4j back then, I think) set to exactly 200 seconds. This could be changed through the Jetty API, but the Neo4j server did not appear to offer any way to configure it.
Switching to Neo4j 2.x eventually solved the issue (why exactly is unknown); with those newer versions the problem did not come up anymore.
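For completeness: in Jetty 6 this would be the connector's maxIdleTime, which one could raise in jetty.xml when the server is configurable that way (not the case for Neo4j 1.x, as described above). A sketch, assuming the Jetty 6 org.mortbay API and port 7474:

```xml
<Call name="addConnector">
  <Arg>
    <New class="org.mortbay.jetty.nio.SelectChannelConnector">
      <Set name="port">7474</Set>
      <!-- milliseconds; raise from the 200000 ms (200 s) default -->
      <Set name="maxIdleTime">600000</Set>
    </New>
  </Arg>
</Call>
```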