Neo4j slow loading

I'm crawling Twitter (using the sampling API) and saving the crawled data into a Neo4j database.
When the total number of nodes exceeds 20,000, my Neo4j instance takes forever to start.
It just stops at "...waiting for server to be ready ..." and nothing happens.
I normally wait about 5~10 minutes before terminating the startup, and so far I have been unable to get the server started with that number of nodes.
However, when I remove the "data" directory, everything starts up just fine.
I inspected the neo4j.log file and found the following:
May 26, 2013 9:21:53 PM org.neo4j.server.logging.Logger log
INFO: Setting startup timeout to: 120000ms based on -1
I was wondering: does Neo4j load everything into memory during service startup?
What should I do to speed up the startup time of the service?

One way is to check the initial memory recommendation using: https://neo4j.com/docs/operations-manual/current/tools/neo4j-admin-memrec/
and then adjust the heap accordingly (Neo4j uses the heap to store graph state, query state, and so on).
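For example, `neo4j-admin memrec` prints suggested values that you can then copy into conf/neo4j.conf. The numbers below are placeholders for illustration, not recommendations for your workload:

```
# Print sizing suggestions based on the machine's available RAM:
#   bin/neo4j-admin memrec
#
# Then apply them in conf/neo4j.conf (example values only):
dbms.memory.heap.initial_size=4g
dbms.memory.heap.max_size=4g
dbms.memory.pagecache.size=6g
```

Keeping initial and max heap equal avoids resize pauses, and the page cache (which holds the store files) matters as much as the heap for startup and query speed.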

Related

neo4j 3.5.x GC running over and over again, even after just starting the server

Our application uses neo4j 3.5.x (tried both community and enterprise editions) to store some data.
No matter how we set up memory in conf/neo4j.conf (we tried many combinations of initial/max heap settings from 4 to 16 GB), the GC process runs every 3 seconds, bringing the machine to its knees and slowing the whole system down.
There is one combination (8g/16g) that seems more stable, but 20-30 minutes after our system starts being used, GC kicks in again on Neo4j and enters this "deadly" loop.
If we restart the Neo4j server without restarting our system, GC starts again as soon as our system begins querying Neo4j (we've noticed this behavior consistently).
We had a 3.5.x Community instance that had been working fine until last week, when we tried to switch to Enterprise. We copied the data/ folder from the Enterprise instance over to the Community instance and started it... only to have it behave the same way the Enterprise instance did, running GC every 3 seconds.
Any help is appreciated. Thanks.
Screenshot of jvisualvm with 8g/16g of heap
In debug.log, only these entries look significant:
2019-03-21 13:44:28.475+0000 WARN [o.n.b.r.BoltConnectionReadLimiter] Channel [/127.0.0.1:50376]: client produced 301 messages on the worker queue, auto-read is being disabled.
2019-03-21 13:45:15.136+0000 WARN [o.n.b.r.BoltConnectionReadLimiter] Channel [/127.0.0.1:50376]: consumed messages on the worker queue below 100, auto-read is being enabled.
2019-03-21 13:45:15.140+0000 WARN [o.n.b.r.BoltConnectionReadLimiter] Channel [/127.0.0.1:50376]: client produced 301 messages on the worker queue, auto-read is being disabled.
I also have a neo4j.log excerpt from around the time the jvisualvm screenshot was taken, but it's 3500 lines long, so here it is on Pastebin:
neo4j.log excerpt from around the time the jvisualvm screenshot was taken
Hope this helps. I also have the logs for the Enterprise edition if needed, though they are a bit more 'chaotic' (Neo4j restarts in between) and I have no jvisualvm screenshot for them.

Why does my website time out while running a JMeter load test?

I'm new to JMeter. I followed this tutorial to learn it.
I tried to run a load test under the following conditions:
Number of Threads (Users) - 1000
Ramp-Up Period (in seconds) - 10
Loop Count - 5
While the test was running, I tried to load my website (after clearing the cache), but it took much longer than usual to load the page. This issue doesn't occur when the browser has cached data.
Can someone please tell me why this is happening? Is it because when 1000 users load my site, it may crash or something?
Any kind of explanation will be appreciated.
If you try to load your website (after clearing the cache) while your JMeter test is running, it will always take longer than usual. Because you cleared the cache, the browser has to fetch and render the page resources again to display your page. If you then load the page again without clearing the cache, it takes less time: the browser doesn't fetch page resources every time, it saves them in its cache and reuses them, so it can open the page in the shortest possible time. That's why the first load of a page always takes longer than subsequent loads (without clearing the cache).
Another point: since your JMeter test was running while you tried to load your website, the site will respond more slowly. Your application was already handling the requests sent by JMeter, so the extra load impacts your page response time.
A ramp-up period of 10 seconds for 1000 users is not good practice. You have to give those 1000 users enough time to ramp up; 10 seconds is far too short. So during the JMeter test it is entirely expected that your browser takes an unusually long time to load your webpage, or ends up with a "Connection Timeout". That doesn't necessarily mean your application crashed; it's simply an unrealistic test-script design in JMeter.
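To see why 10 seconds is aggressive, here is a quick back-of-envelope calculation using the values from the test plan above:

```shell
# Rough arithmetic for the plan above: 1000 threads, 10 s ramp-up, 5 loops.
threads=1000
ramp_up_s=10
loops=5

start_rate=$((threads / ramp_up_s))   # threads started per second during ramp-up
total=$((threads * loops))            # total loop iterations the plan attempts

echo "${start_rate} new users per second during ramp-up"
echo "${total} total iterations"
```

Starting 100 new users every second is a burst most small setups cannot absorb; stretching the ramp-up (or stepping the load up gradually) gives the server time to warm up.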
Could you elaborate on the type of web server software you are using, e.g.:
- Apache HTTPD 2.4 / Nginx / Apache Tomcat / IIS
And the underlying operating system?
- Windows (Server?) / Mac OS X / Linux
If your web server machine is not limited by the maximum performance of your CPU, disk, etc. (check the Task Manager), your performance might be limited by the Apache configuration.
Could you please check the Apache HTTPD log files for relevant warnings?
Depending on your configuration (httpd.conf plus any files "Include"d from there) you may be using the mpm_winnt worker, which has a configurable number of worker threads, 64 by default according to:
https://httpd.apache.org/docs/2.4/mod/mpm_common.html#threadsperchild
Once these are all busy, new requests from any client (your browser, your load test, etc.) will have to wait their turn.
Try and see what happens if you increase the number of threads!
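If it turns out you are on Windows with mpm_winnt, the directive to raise is ThreadsPerChild. A hypothetical httpd.conf fragment (the value 150 is just an example to experiment with, not a tuned recommendation):

```
# httpd.conf — only relevant when the mpm_winnt MPM is in use (Windows).
# Default ThreadsPerChild is 64; raising it lets Apache serve more
# concurrent requests before new connections have to queue.
<IfModule mpm_winnt_module>
    ThreadsPerChild 150
</IfModule>
```

Restart Apache after the change and rerun the JMeter test while watching CPU and memory; if the machine saturates, more threads will not help.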

How to debug Neo4J stalls?

I have a Neo4J server running that periodically stalls out for 10s of seconds. The web frontend will say it's "disconnected" with the red warning bar at the top, and a normally instant REST query in my application will apparently hang, until the stall ends and then everything returns to normal. The web frontend becomes usable and my REST query completes fine.
Is there any way to debug what is happening during one of these stall periods? Can you get a list of currently running queries? Or a list of what hosts are connected to the server? Or any kind of indication of server load?
Most likely JVM garbage collection is kicking in because you haven't allocated enough heap space.
There are a number of ways to debug this. You can, for example, enable GC logging (uncomment the appropriate lines in neo4j-wrapper.conf), or use a profiler (e.g. YourKit) to see what's going on and why the pauses happen.
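In the Neo4j versions of that era, the relevant lines in conf/neo4j-wrapper.conf look roughly like the sketch below (they ship commented out; the exact numbering/suffix of the wrapper.java.additional entries varies by version, so treat this as an approximation and check your own file):

```
# conf/neo4j-wrapper.conf — uncomment to write a GC log you can inspect:
wrapper.java.additional=-Xloggc:data/log/neo4j-gc.log
wrapper.java.additional=-XX:+PrintGCDetails
wrapper.java.additional=-XX:+PrintGCDateStamps
```

Long pauses in the resulting GC log that line up with your stall windows confirm the heap is too small (or the workload is churning too much garbage).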

Neo4j 2.0.4 browser cannot query large datasets

Whenever I try to run Cypher queries in the Neo4j 2.0 browser on large (anywhere from 3 to 10 GB) batch-imported datasets, I receive an "Unknown Error." The Neo4j server then stops responding, and I need to kill it via Task Manager. Before this happens, the server shuts down quickly and easily. I have no such issues with smaller batch-imported datasets.
I work on a Win 7 64-bit computer, using the Neo4j browser. I have adjusted the .properties file to allow much larger memory allocations. I have configured my JVM heap to 12g, which should be fine for a 64-bit JDK. I recently doubled my RAM, which I thought would fix the issue.
My CPU usage is pegged. I have logging enabled but I don't know where to find the logs.
I really like the visualization capabilities of the 2.0.4 browser, does anyone know what might be going wrong?
Your query is taking a long time, and the web browser interface reports "Unknown Error" after a certain timeout period. The query is still running, but you won't see the results in the browser. This drove me nuts too when it first happened to me. If you run the query in the neo4j shell you can verify whether or not this is the problem, because the shell won't time out.
Once this timeout occurs, you can find that the whole system becomes quite non-responsive, especially if you re-run the query, because now you have two extremely long queries running in parallel!
Depending on the type of query, you may be able to improve performance. Sometimes it's as simple as limiting the number of returned nodes (in cases where you only need to find one node or path).
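As a sketch (the label and relationship type here are made up for illustration), adding a LIMIT keeps the browser from trying to stream an unbounded result set:

```
// Hypothetical query shape: cap the rows returned to the browser.
MATCH (a:User)-[:FOLLOWS]->(b:User)
RETURN a, b
LIMIT 25
```

Even when you ultimately need more rows, running a limited version first tells you quickly whether the query itself is the bottleneck.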
Hope this helps.
Grace and peace,
Jim

Neo4j, There is not enough space on the disk

In our application, which uses neo4j-1.8.2, we have a so-called synchronization process. This process reads some data from a SQL Server database, processes it in some way, and makes the appropriate changes to the graph database. If we run out of disk space (on the disk where the Neo4j database is located), the Neo4j server stops working (it is still running but stops answering queries). In the Neo4j web admin I get the following response for each Cypher query: "Failed to get current transaction.". In the log file I see:
SEVERE: The RuntimeException could not be mapped to a response, re-throwing to the HTTP container
org.neo4j.graphdb.TransactionFailureException: Unable to commit transaction
...
Caused by: java.io.IOException: There is not enough space on the disk
My question is: when I clean some content from the disk and around 10 GB of free space appears, will the Neo4j server start working (answering queries) again automatically, or do I need to restart it?
What I observe is that it does not start working after cleaning up some content; I have to restart it, and then it works again. I want to know whether this is expected, or whether I can do something to avoid restarting the Neo4j server.
Thanks in advance,
Denys
You have to restart Neo4j after running out of disk space. Best practice is to set up a monitoring system that alerts you when free disk space drops below a certain threshold.
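A minimal sketch of such a check (the path and threshold are assumptions; a real setup would run this from cron and feed it into an alerting channel rather than just echoing):

```shell
#!/bin/sh
# Warn when free space on the volume holding the Neo4j data directory
# drops below a threshold. Path and limit are examples; adjust for your setup.
DATA_DIR="${1:-/}"            # e.g. /var/lib/neo4j/data
LIMIT_KB="${2:-10485760}"     # 10 GB expressed in KB

# POSIX df: second line, fourth column = available KB on that filesystem.
free_kb=$(df -Pk "$DATA_DIR" | awk 'NR==2 {print $4}')

if [ "$free_kb" -lt "$LIMIT_KB" ]; then
    echo "WARNING: only ${free_kb} KB free under ${DATA_DIR}"
else
    echo "OK: ${free_kb} KB free under ${DATA_DIR}"
fi
```

Alerting well before the disk is actually full is the point: it gives you time to free space and restart Neo4j on your own schedule instead of mid-transaction.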
