Neo4j 4.4.11 OutOfMemory - StatementProcessorTxManager leak suspect

After start-up we see an out-of-memory condition occurring, usually within 24 hours, depending on the max heap size set.
We have been taking periodic heap dumps using jmap. Using the Eclipse Memory Analyzer we can see the following info on a leak suspect:
One instance of ‘org.neo4j.bolt.transaction.StatementProcessorTxManager’ loaded by ‘jdk.internal.loader.ClassLoaders$AppClassLoader @ 0xe08f5ea0’ occupies 311,070,080 (90.32%) bytes. The memory is accumulated in one instance of ‘java.util.concurrent.ConcurrentHashMap$Node[]’, loaded by ‘’, which occupies 311,069,848 (90.32%) bytes.
Versions:
Neo4j community (single server) version: 4.4.11 and 4.4.10
Operating system: Ubuntu 22.04.1 LTS running on Azure App Service for Linux as Docker container
API/Driver: Neo4jClient 4.1.26 (HTTP client)
.NET version: .NET Core 3.1
We have tried larger heap sizes (currently a max of 512 MB), but we just see the time before failure increase, with the memory used by org.neo4j.bolt.transaction.StatementProcessorTxManager growing along with it.
All queries (no writes) are sent by the .NET C# client as single requests against the /db/neo4j/tx/commit endpoint.
Looking for ideas as to whether this is a database, client, or configuration/usage issue that others might have seen and know how to resolve.
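For anyone comparing setups: in a Neo4j 4.4 Docker deployment the heap is usually pinned explicitly, either in neo4j.conf or via the image's environment-variable convention. A minimal sketch using the 512 MB figure from above (illustrative, not a recommendation):

dbms.memory.heap.initial_size=512m
dbms.memory.heap.max_size=512m

or, as Docker environment variables:

NEO4J_dbms_memory_heap_initial__size=512m
NEO4J_dbms_memory_heap_max__size=512m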

Related

Couch DB running on Windows OS is paging while it has ample RAM available

I have a three-node CouchDB cluster running on Windows. Each node has 16 vCPUs and 64 GB RAM. I am fairly new to CouchDB and to non-relational databases in general.
What I am struggling with is that one of the nodes (which I assume is the coordinator) is using about 120 GB of disk space for the page file while it has about 48 GB of RAM free.
We increased the RAM from 32 GB to 64 GB to help with the paging, only to find that it now uses even more of the page file, since the page file is currently managed by the Windows OS.
I would assume it would start paging once it had used all the available RAM, but what we have is a 120 GB paging file alongside about 50 GB of free RAM.
Why is it using the page file, which has a slower response time, while free RAM is available to it?
Wasn't it supposed to use unreserved RAM for disk caching of frequently accessed DB file blocks to speed up access? Why is it behaving this way?
Is there a CouchDB or Erlang BEAM configuration setting that I should be looking at?
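Outside of CouchDB itself, one thing worth ruling out is Windows' automatic page-file management, which is what grows the file here. A sketch of capping it manually from an elevated prompt (the 16 GB size is an illustrative assumption):

wmic computersystem where name="%COMPUTERNAME%" set AutomaticManagedPagefile=False
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=16384,MaximumSize=16384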

Neo4j 2.0 Windows: Memory stops being allocated to service after a few service restarts

This is odd behavior, but I have been able to reproduce it 100%. I'm currently testing Neo4j 2.0.1 Enterprise on my laptop and desktop machines. The laptop has 8 GB RAM and an i7 4600U; the desktop has 16 GB RAM and an i7 4770K. Both machines are running Windows 8.1 x64 Enterprise and the same version of Java (latest as of Feb 19, 2014).
On first boot of each, when I run an expensive (or not so expensive) query, I can see the memory allocation go up (as expected for the cache). When starting the server, the initial allocation is around 200-250 MB, give or take. After a few expensive queries it goes up to about 2 GB, which is fine; I want this memory allocation. However, I have a batch script that stops the service, clears out the database and restarts the service (to start fresh when testing different development methods).
After 3 or 4 restarts, I noticed that the memory will NEVER climb above 400 MB. Processor usage sits around 30-40% during the expensive query, but memory never increases. I then get Unknown Error messages in the console when running other expensive queries. This is the same query that, after a full reboot of the system, would bring the memory usage up to 2 GB.
I'm not sure what could be causing this, or if there is a way to make sure that memory continues to be allocated, even after a service restart. Rebooting a production server doesn't seem like a viable option, unless running in HA.
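For what it's worth, on Neo4j 2.0 for Windows the service heap is set in conf/neo4j-wrapper.conf; pinning it removes one variable when comparing behaviour across restarts. A sketch with assumed values (in MB):

wrapper.java.initmemory=2048
wrapper.java.maxmemory=2048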

Solr on Tomcat, Windows OS consumes all memory

Update
I've configured both the Xms (initial memory) and Xmx (maximum memory allocation) JVM parameters, and after a restart I hooked up VisualVM to monitor the Tomcat memory usage. While the indexing process is running, the memory usage of Tomcat seems OK; consumption stays within the range of the defined JVM params. (see image)
So it seems that filesystem buffers are consuming all the leftover memory and never releasing it? Is there a way to handle this behaviour, like changing the nGram size or the directoryFactory (see the sketch below)?
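For reference, the directory implementation is chosen in solrconfig.xml; a minimal sketch of switching away from the memory-mapped factory (assuming Solr 4.x element syntax; whether this helps depends on the workload, since MMapDirectory's mapped pages live in the OS cache rather than the heap):

<directoryFactory name="DirectoryFactory" class="solr.NIOFSDirectoryFactory"/>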
I'm pretty new to Solr and Tomcat, but here we go:
OS: Windows Server 2008
4 CPUs
8 GB RAM
Tomcat service version 7.0 (64-bit)
Only running Solr
No optional JVM parameters set, but Solr configured through the GUI
Solr version 4.5.0
One core instance (both for querying and indexing)
Schema config:
minGramSize="2" maxGramSize="20"
most of the fields are stored = "true" (required)
Solr config:
ramBufferSizeMB: 100
maxIndexingThreads: 8
directoryFactory: MMapDirectory
autocommit: maxdocs 10000, maxtime 15000, opensearcher false
cache (defaults): filtercache initialsize:512 size: 512 autowarm: 0
queryresultcache initialsize:512 size: 512 autowarm: 0
documentcache initialsize:512 size: 512 autowarm: 0
We're using a .Net service (based on Solr.Net) for updating and inserting documents on a single Solr core instance. The size of the documents sent to Solr varies from 1 KB up to 8 MB; we send the documents in batches, using one or multiple threads. The current size of the Solr index is about 15 GB.
The indexing service runs for around 3 to 4 hours to complete all inserts and updates to Solr. While the indexing process is running, the Tomcat process memory usage keeps growing, up to more than 7 GB of RAM, and does not come back down, even after 24 hours.
After a restart of Tomcat, or a Reload Core in the Solr Admin, the memory drops back to 1 to 2 GB of RAM. Memory leak?
Is it possible to configure the maximum memory usage for the Solr process on Tomcat?
Are there other alternatives? Best practices?
Thanks
You can set up the JVM memory settings on Tomcat. I usually do this with a setenv.bat file in the bin directory of Tomcat (the same directory as the catalina.bat/.sh files).
Adjust the following values as per your needs:
set JAVA_OPTS=%JAVA_OPTS% -Xms256m -Xmx512m
Here are clear instructions on it:
http://wiki.razuna.com/display/ecp/Adjusting+Memory+Settings+for+Tomcat
First you have to set the Xmx parameter to limit the maximum memory that can be used by Tomcat. But in the case of Solr you have to remember that it uses a lot of memory outside of the JVM to handle filesystem buffers, so never use more than 50% of the available memory for Tomcat in this case.
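On the 8 GB machine described above, that rule of thumb caps the Tomcat heap at 4 GB; a sketch for setenv.bat (the exact figures are illustrative assumptions, leaving roughly 5 GB for the OS filesystem cache):

set JAVA_OPTS=%JAVA_OPTS% -Xms2048m -Xmx3072m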
I have the following setup (albeit a much smaller problem)...
5000 documents, with document sizes ranging from 1 MB to 30 MB.
We have a requirement to run under 1 GB for the Tomcat process on a 2 CPU / 2 GB system.
After a bit of experimentation I came up with these settings for Java:
-Xms448m
-Xmx768m
-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC
-XX:ParallelCMSThreads=4
-XX:PermSize=64m
-XX:MaxPermSize=64m
-XX:NewSize=384m
-XX:MaxNewSize=384m
-XX:TargetSurvivorRatio=90
-XX:SurvivorRatio=6
-XX:+CMSParallelRemarkEnabled
-XX:CMSInitiatingOccupancyFraction=55
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+OptimizeStringConcat
-XX:+UseCompressedOops
-XX:MinHeapFreeRatio=5
-XX:MaxHeapFreeRatio=5
These helped, but I still ran into OutOfMemoryErrors, with Tomcat using too much memory even with such a small dataset.
The solution, or the things/configuration I have set so far that seem to hold up well, are as follows:
Disable all caches other than the QueryResultCache.
Do not include text/content fields in your query; only include the id.
Do not use a row size greater than 10, and do not include highlighting.
If you are using highlighting (this is the biggest culprit), get the document identifiers from the query first, and then run the query again with highlighting and the search terms, with the id field included (see the sketch below).
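A sketch of that two-pass pattern as raw Solr query strings (core name, field names and terms are illustrative assumptions):

/solr/core/select?q=content:foo&fl=id&rows=10
/solr/core/select?q=id:(17 OR 42 OR 99)&hl=true&hl.q=content:foo&fl=id

The first request fetches only identifiers; the second highlights only the documents already selected, so no large stored fields are pulled for rows that will never be shown.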
Finally, for the memory problem, I had to grudgingly implement an unorthodox approach to solve the Tomcat/Java memory-hogging issue (as Java never gives memory back to the OS).
I created a memory-governor service that runs with debug privilege and calls the Windows API to force the Tomcat process to release memory. I also hold a global mutex to prevent access to Tomcat while this happens, in case a call comes in.
Surprisingly, this approach is working out well, but it is not without its own perils if you do not have the option to control access to Tomcat.
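The governor's code isn't shown, but the Windows call it describes is presumably SetProcessWorkingSetSize, which trims a process's working set. A minimal hypothetical sketch in C (the PID handling and privilege setup are assumptions; this pages memory out, it does not shrink the Java heap):

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

/* Ask Windows to trim the working set of a target process (e.g. Tomcat).
   Passing (SIZE_T)-1 for both sizes removes as many pages as possible;
   opening another account's process additionally needs SeDebugPrivilege. */
int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: trimws <pid>\n");
        return 1;
    }
    DWORD pid = (DWORD)atoi(argv[1]);
    HANDLE h = OpenProcess(PROCESS_SET_QUOTA | PROCESS_QUERY_INFORMATION,
                           FALSE, pid);
    if (h == NULL) {
        fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }
    BOOL ok = SetProcessWorkingSetSize(h, (SIZE_T)-1, (SIZE_T)-1);
    CloseHandle(h);
    return ok ? 0 : 1;
}

Note that trimming the working set only moves pages to the page file; the JVM will fault them back in as it touches the heap again, which is consistent with the perils mentioned above.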
If you find a better solution/configuration changes please let us know.

Tomcat per webapp memory settings

I have two web applications running inside Tomcat. Java heap space is allocated to Tomcat and shared by both applications, and one application consumes more while the other gets an OUT_OF_MEMORY error.
Is there any way to set memory limits per web application? Say 70% for one webapp and 30% for the other, out of the overall memory allocated to Tomcat.
Regards
Ganesh
The memory is defined per JVM instance, so if you are using one Tomcat you cannot do it.
However, you can run two Tomcat instances, one per web application, and then you will have finer control over the memory allocation for each webapp; see the sketch below.
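A sketch of the two-instance layout using the standard CATALINA_BASE mechanism (paths, ports and heap sizes are illustrative assumptions; each base directory needs its own conf\ with distinct ports and its own webapps\):

set CATALINA_HOME=C:\tomcat7
set CATALINA_BASE=C:\tomcat-base-app1
set JAVA_OPTS=-Xms512m -Xmx1400m
call %CATALINA_HOME%\bin\startup.bat

set CATALINA_BASE=C:\tomcat-base-app2
set JAVA_OPTS=-Xms256m -Xmx600m
call %CATALINA_HOME%\bin\startup.bat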
No. There is no way for one portion of Java code to control the memory consumption of another portion of code that it calls. In other words, the web container is just a Java program that calls other Java class methods found in the application.
So the only control one has is JVM arguments, and those arguments only hint to the JVM approximately where to fail with an out-of-memory error. No, it is not possible.

make full use of 24G memory for jboss

We have a Solaris SPARC 64-bit machine running JBoss. It has 24 GB of memory, but because of a JVM limitation I can only set JAVA_OPTS="-server -Xms256m -Xmx3600m -XX:MaxPermSize=3600m".
I don't know the exact cap, but if I set it to 4000m, Java won't accept it.
Is there any way to use this 24 GB of memory fully, or at least more efficiently?
If I use a cluster on one machine, is it stable? I've heard it requires rewriting some parts of the code.
All 32-bit processes are limited to 4 gigabytes of addressable memory: 2^32 bytes == 4 GiB.
If you can run JBoss as a 64-bit process (usually just by adding "-d64" to JAVA_OPTS), then you can tune JBoss with increased thread and object pools to make use of that memory. As others have mentioned, you might see horrific garbage-collection pauses, but you may be able to figure out how to avoid those with some load testing and the right choice of garbage-collection algorithm.
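A sketch of what the 64-bit options might look like on that 24 GB box (the heap figures are illustrative assumptions, leaving headroom for the OS and other processes):

JAVA_OPTS="-server -d64 -Xms2g -Xmx16g -XX:MaxPermSize=512m"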
If you have to run as a 32-bit process, then yes, you can at least run multiple instances of JBoss on the same server. In your case, there are three options: zones, virtual interfaces, and port binding.
Solaris Zones
Since you're running Solaris, it's relatively easy to create virtual servers ("non-global zones") and install JBoss in each one just like you would on the real server.
Multi-homing
Configure an extra IP address for each JBoss instance you want to run (usually by adding virtual interfaces, though you could also install extra NICs) and bind each instance of JBoss to its own IP address with the "-b" startup option, as sketched below.
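A sketch of that approach, assuming a Solaris logical interface on bge0 (the interface name and addresses are illustrative):

ifconfig bge0:1 plumb
ifconfig bge0:1 192.168.1.11 netmask 255.255.255.0 up
./run.sh -b 192.168.1.11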
Service Binding Manager
This is the most complicated and brittle option, but at least it requires no OS changes.
Whether or not to actually configure the JBoss instances as a cluster depends on your application. One benefit is the ability to use HTTP session failover. One downside is cluster-merge issues if your application is unstable or tends to become unresponsive for periods of time.
You mention your application is slow; before you go too far down any of these paths, try to understand where your bottlenecks are. Use jvisualvm or jstat to observe whether you're doing frequent garbage collections. If you're not, then adding heap memory or extra JBoss instances may not resolve your performance woes.
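For example, sampling GC activity once per second (the PID is a placeholder):

jstat -gcutil <pid> 1000

If the FGC column climbs steadily, frequent full collections are the bottleneck; if it barely moves, more heap probably won't help.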
You can't use the full physical memory; the JVM requires its maximum heap to be a contiguous chunk of address space. Try java -Xmxnnnnm -version to probe the maximum available on your box.
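For example, bisecting on a 32-bit JVM (the figures are illustrative):

java -Xmx3600m -version   (prints the version banner if the heap fits)
java -Xmx4000m -version   (fails with "Could not reserve enough space for object heap" if it does not)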
