Neo4j keeps crashing

Neo4j is crashing on me multiple times per day. I am running the Enterprise 2.1.2 edition on an Ubuntu 14 box with 16 GB of RAM and a 6-core processor. The graph is not very large, yet I am continually running out of heap space, and the database becomes unresponsive and requires a restart. This is taking the website down several times per day, I am extremely frustrated, and I am not sure what to do.
I would be happy to provide any additional info you may require to help me debug.
Thanks in advance for your help.
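One thing worth checking first, assuming the installation still has its default memory settings: in Neo4j 2.1.x the JVM heap is set in conf/neo4j-wrapper.conf, and the defaults are modest relative to a 16 GB box. The numbers below are examples only, not a sizing recommendation for this workload:

# conf/neo4j-wrapper.conf (Neo4j 2.1.x)
# Initial and maximum JVM heap size, in MB (example values only)
wrapper.java.initmemory=4096
wrapper.java.maxmemory=4096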

Related

Quick and dirty way to solve/kill memory increases on Heroku

I have an app running on Heroku with a few thousand visitors per day. I do not update it very often as it runs well anyway. Recently, however, I started getting memory increases in a way I have never had before. I am 98% sure it does not have to do with any changes in the code, as I have not touched the code in quite a while. I know from experience that tracing down memory issues is extremely difficult and time-consuming, and at the moment I don't have the time to do it.
Considering the fact that I get this staircase increase in memory over time (over the course of a few hours) once a day, is there a quick and dirty way of just restarting the server once it starts doing so, so it won't slow down the server for those hours? Something like
RestartApp if ServerMemory > 500 Mb
or the likes of it?
I am running Ruby 2.4.7 (is that likely an issue in terms of memory increases?) and Rails 4.2.10.
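A rough sketch of the "RestartApp if ServerMemory > 500 Mb" idea, assuming the get_process_mem and platform-api gems and that a Heroku API token and app name are available in the environment (the gem choice, variable names, and threshold are assumptions, not part of the original question):

require 'get_process_mem'
require 'platform-api'

MEMORY_LIMIT_MB = 500  # the threshold from the question

def restart_if_bloated
  # Resident memory of the current process, in MB
  mem_mb = GetProcessMem.new.mb
  return if mem_mb < MEMORY_LIMIT_MB

  heroku = PlatformAPI.connect_oauth(ENV.fetch('HEROKU_API_TOKEN'))
  # Restart every dyno in the app; crude, but matches the "quick and dirty" ask
  heroku.dyno.restart_all(ENV.fetch('HEROKU_APP_NAME'))
end

Since GetProcessMem only sees the process it runs in, a check like this would need to live inside the web process itself (for example, a lightweight recurring thread started from an initializer), not in a separate one-off dyno.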

Neo4j HA Servers keep failing

We have just put our system into production and have a lot of users on it. Our servers keep failing and we are not sure why. It seems to start with one server, then a new master is elected, and a few minutes later all the servers in the cluster go down. I have it set up to send all the reads to the read databases and to leave the writes to the master. I have looked through the logs and cannot find a root cause. Let me know what logs I should upload and/or where I should look. Today alone we have had to restart the servers 4 times; that fixes things for a bit, but it's not a cure for the issue.
All database servers have 16 GB of RAM, 8 CPUs, and SSDs. I have them set up with the following settings in neo4j.properties:
neostore.nodestore.db.mapped_memory=1024M
neostore.relationshipstore.db.mapped_memory=2048M
neostore.propertystore.db.mapped_memory=6144M
neostore.propertystore.db.strings.mapped_memory=512M
neostore.propertystore.db.arrays.mapped_memory=512M
We are using New Relic to monitor the servers, and we do not see the hardware going above 50% CPU or 40% memory, so we are pretty sure that is not it.
Any help is appreciated :)
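As a rough sanity check using only the numbers in the config above, the store mappings alone account for a large share of each 16 GB box before the JVM heap is counted:

1024 + 2048 + 6144 + 512 + 512 = 10240 MB, i.e. about 10 GB of mapped store files

If the heap on these servers is also sized in the gigabytes, that leaves little headroom for the OS and filesystem cache; whether this is actually what is killing the cluster would need to be checked against the heap settings in neo4j-wrapper.conf.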

Memory leak behavior in Neo4j community edition

I previously posted on the neo4j mailing list (https://groups.google.com/forum/#!topic/neo4j/zn-7lKHVvNI) but haven't received any response from the community, so I'm x-posting here...
I've noticed what appears to be memory leak behavior from neo4j community edition. Running this test code (https://gist.github.com/mlaldrid/85a03fc022170561b807) against 2.1.2 (also tested against 2.0.3) and a 512MB heap results in GC churn after a few hundred thousand cypher queries. Eventually I either get an OutOfMemory error or jetty times out.
However, when I run the same test code against an eval copy of the Neo4j enterprise edition, it proceeds through 3.5M queries with no sign of bumping up against the 512MB heap limit. I killed the test after that, satisfied that the behavior was sufficiently different from the community edition.
My questions are thus: Why is this memory leak behavior different in the community and enterprise editions? Is it something that the enterprise edition's "advanced caching" feature solves? Is it a known but opaque limitation of the community edition?
Thanks for any insight on this issue.
This is a recently discovered memory leak in 2 of the 4 cache types available for the community edition (the weak and soft caches). It does not affect enterprise, as enterprise uses the 'hpc' cache by default.
It only affects deployments where you are unlikely to read from the existing data in the db, or where the majority of the load on the system is writes.
We've got a fix for this which will go out in subsequent releases. For now, if your use case is unfortunate enough to trigger this issue, you'll need to use either the 'strong' cache or 'none' in community edition, or switch to enterprise until the next patch release.
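For what it's worth, the cache type in 2.x community edition is selected with the cache_type setting in conf/neo4j.properties, so the two workarounds named above would look like this (pick one):

# conf/neo4j.properties
cache_type=strong
# or, to disable the object cache entirely:
# cache_type=none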
I'm posting the output of the Sampler from jvisualvm.
I guess this answers the question, as the leak is still there in 2.2.0.
Edit:
The problem was using ExecutionEngine. I used the execute method on GraphDatabaseService instead and that solved my problem.

Rails Server Memory Leak/Bloating Issue

We are running 2 Rails applications on a server with 4 GB of RAM. Both use Rails 3.2.1, and whether run in development or production mode, they eat away RAM at incredible speed, consuming up to 1.07 GB of RAM each day. Keeping the server running for just 4 days triggered all the memory alarms in monitoring, and we had just 98 MB of RAM free.
We tried ActiveRecord optimizations related to bloating, but still no effect. Please help us figure out how we can trace which controller is at fault.
We are using a MySQL database and the WEBrick server.
Thanks!
This is incredibly hard to answer without looking into the project details. Though I am quite sure you won't be using WEBrick in your target production build (right?), so check whether it behaves the same under Passenger or whatever your choice is.
Also, without knowing the details of the project, I would suggest looking at features like generating PDFs, CSV parsing, etc. I have seen a case where generating PDF files was eating resources in a similar fashion, leaving something like 5 MB of non-garbage-collected memory per run.
Good luck.
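To answer the "which controller is at fault" part, one low-effort option is to log the process's resident memory before and after each request and watch for the paths with the biggest jumps. A minimal sketch as a Rack middleware, assuming a Linux host (it reads /proc/self/status); the class name and log format are made up for illustration:

# Rough per-request memory growth logger; not a substitute for a real profiler.
class MemoryGrowthLogger
  def initialize(app)
    @app = app
  end

  def call(env)
    before = rss_kb
    response = @app.call(env)
    after = rss_kb
    Rails.logger.info("RSS +#{after - before} KB (now #{after} KB) " \
                      "#{env['REQUEST_METHOD']} #{env['PATH_INFO']}")
    response
  end

  private

  # Resident set size of the current process, in KB (Linux only).
  def rss_kb
    File.read('/proc/self/status')[/VmRSS:\s+(\d+)/, 1].to_i
  end
end

Registered with config.middleware.use MemoryGrowthLogger in config/application.rb, it makes the worst-offending actions stand out in the Rails log without pulling in a full profiler.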

Delayed Jobs leaking memory?

I'm using collectiveidea's delayed_job with my Ruby on Rails app (v2.3.8), and running about 40 background jobs with it on an 8GB RAM Slicehost machine (Ubuntu 10.04 LTS, Apache 2).
Let's say I ssh into my server with no workers running. When I do free -m, I see I'm generally using about 1 GB of RAM out of 8. Then after starting the workers and waiting about a minute for them to be utilized by the code, I'm up to about 4 GB. If I come back in an hour or two, I'll be at 8 GB and into swap, and my website will be generating 502 errors.
So far I've just been killing the workers and restarting them, but I'd rather fix the root of the problem. Any thoughts? Is this a memory leak? Or, as a friend suggested, do I need to figure out a way to run garbage collection?
Actually, Delayed::Job 3.0 leaks memory in Ruby 1.9.2 if your models have serialized attributes. (I'm in the process of researching a solution.)
Here's someone who seemed to have solved it, http://spacevatican.org/2012/1/26/memory-leak-in-yaml-on-ruby-1-9-2
Here's the issue from Delayed::Job https://github.com/collectiveidea/delayed_job/issues/336
Just about every time someone asks about this, the problem is in their code. Try using one of the available profiling tools to find where your job is leaking. ( https://github.com/wycats/ruby-prof or similar.)
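A sketch of what profiling one job with ruby-prof might look like, run in isolation (e.g. from a console); SomeJob here is a placeholder, not anything from the question:

require 'ruby-prof'

RubyProf.start
SomeJob.new.perform   # placeholder for whatever the worker actually runs
result = RubyProf.stop

# Print a flat report of where the job spent its time
RubyProf::FlatPrinter.new(result).print(STDOUT)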
Triggering GC at the end of each job will reduce your max memory usage at the cost of thrashing your throughput. It won't stop Ruby from bloating to the max size required by any individual job, however, since Ruby can't free memory back to the OS. I don't recommend taking this approach.
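For reference only, since the answer above advises against it: forcing a GC after each job could be wired up as a delayed_job 3.x plugin roughly like this (the plugin name is made up):

class GcAfterJobPlugin < Delayed::Plugin
  callbacks do |lifecycle|
    # Run a full garbage collection once each job has finished
    lifecycle.after(:perform) do |worker, job|
      GC.start
    end
  end
end

# e.g. in an initializer
Delayed::Worker.plugins << GcAfterJobPlugin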
