How can we increase Java heap size in Neo4j?

Can anyone please help me increase the Java heap size in Neo4j? Please explain the required steps one by one, as I am a newbie to Neo4j. I am not able to understand the answers in other threads, or what is written in the Neo4j documentation.

If you are a newbie you generally don't need to increase the Java heap size; you can work with the default values.
It is well explained in the official documentation here.
Check also here and here.
What you need to do in general is:
Stop the Neo4j service ($NEO4J_HOME/bin/neo4j stop)
Find the $NEO4J_HOME/conf/neo4j.conf file on your system and set these values:
dbms.memory.heap.initial_size: initial heap size (in MB)
dbms.memory.heap.max_size: maximum heap size (in MB)
Start the Neo4j service ($NEO4J_HOME/bin/neo4j start)
Generally it depends a lot on how much RAM your server has, but without knowing your workload and database size I would recommend the following values, which will most probably be reasonable for any type of load:
RAM size: recommended heap
16GB: 6GB
32GB: 10GB
64GB: 24GB
128GB: 31GB
You can also run the neo4j-admin command, which can suggest suitable values based on your current database and RAM size:
$NEO4J_HOME/bin/neo4j-admin memrec
Note that besides the heap size, there is one more parameter that affects performance depending on the database size:
dbms.memory.pagecache.size
You can also set it manually in neo4j.conf.
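As a concrete sketch, on a machine with 32GB of RAM the relevant neo4j.conf lines could look like the following; the pagecache value is only illustrative, so prefer the numbers neo4j-admin memrec suggests for your own database:
dbms.memory.heap.initial_size=10g
dbms.memory.heap.max_size=10g
dbms.memory.pagecache.size=16g
Setting the initial and maximum heap to the same value avoids heap resizing pauses while the service is running.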

Related

GraalVM Heap Size Setting Recommendations

What is the usual recommended heap memory setting for a production environment, for one microservice written in Java, if it is compiled as a native image using GraalVM Community Edition? Should I specify both -Xms and -Xmx to keep the minimum heap and maximum heap size the same?
There is no recommendation in the documentation; it says:
Java Heap Size
When executing a native image, suitable Java heap settings will be determined automatically based on the system configuration and the used GC. To override this automatic mechanism and to explicitly set the heap size at run time, the following command-line options can be used:
-Xmx - maximum heap size in bytes
-Xms - minimum heap size in bytes
-Xmn - the size of the young generation in bytes
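For example, to pin the minimum and maximum heap of a native executable at run time, the flags from the quote above are passed directly to the binary; a minimal sketch, assuming a hypothetical executable named my-service:
./my-service -Xms512m -Xmx512m
Whether equal values actually help depends on the GC in use, so treat this as a starting point rather than a recommendation.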
See here for the full document. JVM memory management is an old topic that has been discussed many times before:
how to choose the jvm heap size?
java - best way to determine -Xmx and -Xms required for any webapp & standalone app
Long story short, for any application there is no single "right" number. Some applications may require 4GB, some 64GB; it depends on the load and the data used per request (web app) or on the OS (Windows/Linux) the app runs on. After you have monitored the app for some time you can decide. It is not easy, which is why people have been going serverless lately.
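As a starting point for that monitoring on a regular JVM (not a native image), the JDK's jstat tool can sample heap occupancy; a sketch, with 12345 standing in for the real process id:
jstat -gcutil 12345 5000
This prints heap region utilisation percentages and GC counts every 5 seconds, which shows how much of the configured -Xmx the application actually uses under load.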

Understanding JVM Memory on Containers Environment

I'm running a regular JVM application on containers in GCP.
The container_memory_working_set_bytes metric returns 4GB, while sum(jvm_memory_bytes_used) returns 2GB.
I'm trying to understand which processes are using the remaining 2GB.
In theory, what can consume this memory? How can I investigate it via Prometheus, or from a Linux shell via kubectl exec?
According to these docs, jvm_memory_bytes_used is only heap memory.
The JVM can consume much more than that, and this has already been asked and answered several times.
One of the best answers that I know about is here: Java using much more memory than heap size (or size correctly Docker memory limit)
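If you want to dig in from a shell, one option is the JVM's Native Memory Tracking. This is only a sketch: the pod name my-app-pod is made up, and the JVM must have been started with -XX:NativeMemoryTracking=summary (which adds a small overhead). Assuming the JVM runs as PID 1 inside the container:
kubectl exec -it my-app-pod -- jcmd 1 VM.native_memory summary
The summary breaks committed memory down into heap, thread stacks, metaspace, code cache, GC structures and so on, which usually explains most of the gap between the container and heap metrics.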
I managed to find the difference: it was the JVM committed memory, which can be found via jvm_memory_bytes_committed.

How to increase java heap size for neo4j on Windows?

I want to increase the Java heap size from the default 512MB to 24GB, so that my 250GB RAM server is optimally utilized by Neo4j. I tried changing the -Xms512m value in the neo4j.vmoptions file to -Xm24000m, but Neo4j fails to start with a "heap too big" error.
I checked that I had more than 150GB of free memory at the time. If I change the value to 1024 it works; anything more and it fails to start.
I'm using 64-bit Java on 64-bit Windows.
Any help on how to increase the heap size to 24GB? My DB size on disk is 10GB.
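For reference, a heap of that size is normally written in a .vmoptions file as one JVM flag per line, with both the minimum and maximum set; a minimal sketch, not verified against this particular setup:
-Xms24g
-Xmx24g
Note that -Xm24000m, as quoted above, is not a standard JVM option name; the usual flags are -Xms and -Xmx.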

How to boot CoreOS with different ramdisk size

I am trying to boot CoreOS from a PXE server using ramdisk.
However, no matter what size of ramdisk I specify (with ramdisk_size), CoreOS always takes half of the memory as a ramdisk.
Can anyone tell me how to specify the ramdisk size at boot?
This has to do with the default behaviour of temporary filesystems: they default to a limit of 50% of RAM, but they don't reserve that much memory up front; actual usage grows over time, yet will not exceed the 50% limit.
Also you'll find this in the official CoreOS docs regarding PXE:
The filesystem will consume more RAM as it grows, up to a max of 50%.
The limit isn't currently configurable.

Solr on Tomcat, Windows OS consumes all memory

Update
I've configured both the Xms (initial memory) and Xmx (maximum memory allocation) JVM parameters. After a restart I hooked up VisualVM to monitor the Tomcat memory usage. While the indexing process is running, the memory usage of Tomcat seems OK; memory consumption stays within the range of the defined JVM params (see image).
So it seems that filesystem buffers are consuming all the leftover memory and never releasing it? Is there a way to handle this behaviour, such as changing the nGram size or the directoryFactory?
I'm pretty new to Solr and Tomcat, but here we go:
OS: Windows Server 2008
4 CPUs
8GB RAM
Tomcat 7.0 (64-bit), running as a service
Only running Solr
No optional JVM parameters set, but Solr config through GUI
Solr version 4.5.0.
One Core instance (both for querying and indexing)
Schema config:
minGramSize="2" maxGramSize="20"
most of the fields are stored = "true" (required)
Solr config:
ramBufferSizeMB: 100
maxIndexingThreads: 8
directoryFactory: MMapDirectory
autocommit: maxdocs 10000, maxtime 15000, opensearcher false
cache (defaults): filtercache initialsize:512 size: 512 autowarm: 0
queryresultcache initialsize:512 size: 512 autowarm: 0
documentcache initialsize:512 size: 512 autowarm: 0
We're using a .Net service (based on Solr.Net) for updating and inserting documents on a single Solr core instance. The size of the documents sent to Solr varies from 1KB up to 8MB, and we send the documents in batches, using one or multiple threads. The current size of the Solr index is about 15GB.
The indexing service runs for around 3 to 4 hours to complete all inserts and updates to Solr. While the indexing process is running, the Tomcat process memory usage keeps growing to more than 7GB RAM and does not come down, even after 24 hours.
After a restart of Tomcat, or a Reload Core in the Solr Admin, the memory drops back to 1 to 2GB RAM. Memory leak?
Is it possible to configure the max memory usage for the Solr process on Tomcat?
Are there other alternatives? Best practices?
Thanks
You can set up the JVM memory settings on Tomcat. I usually do this with a setenv.bat file in the bin directory of Tomcat (the same directory as the catalina.bat/.sh files).
Adjust the following values as per your needs:
set JAVA_OPTS=%JAVA_OPTS% -Xms256m -Xmx512m
Here are clear instructions on it:
http://wiki.razuna.com/display/ecp/Adjusting+Memory+Settings+for+Tomcat
First of all you have to set the -Xmx parameter to limit the maximum memory that can be used by Tomcat. But in the case of Solr you have to remember that it uses a lot of memory outside of the JVM to handle filesystem buffers, so never use more than 50% of the available memory for Tomcat in this case.
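On the 8GB machine described above, that rule of thumb would translate into something like the following in setenv.bat; the exact numbers are illustrative only:
set JAVA_OPTS=%JAVA_OPTS% -Xms2048m -Xmx4096m
That caps the heap at roughly half the RAM and leaves the rest to the OS filesystem cache that Solr's MMapDirectory relies on.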
I have the following setup (albeit a much smaller problem)...
5000 documents, document sizes range from 1MB to 30MB.
We have a requirement to run under 1GB for the Tomcat process on a 2 CPU / 2GB system
After a bit of experimentation I came up with these JVM settings:
-Xms448m
-Xmx768m
-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC
-XX:ParallelCMSThreads=4
-XX:PermSize=64m
-XX:MaxPermSize=64m
-XX:NewSize=384m
-XX:MaxNewSize=384m
-XX:TargetSurvivorRatio=90
-XX:SurvivorRatio=6
-XX:+CMSParallelRemarkEnabled
-XX:CMSInitiatingOccupancyFraction=55
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+OptimizeStringConcat
-XX:+UseCompressedOops
-XX:MinHeapFreeRatio=5
-XX:MaxHeapFreeRatio=5
These helped, but I still ran into OutOfMemory issues and Tomcat using too much memory even with such a small dataset.
The solution, or the things/configuration I have set so far that seem to hold up well, are as follows:
Disable all caches other than QueryResultCache
Do not include text/content fields in your query; only include the id.
Do not use a row size greater than 10, and do not include highlighting.
If you are using highlighting (this is the biggest culprit), get the document identifiers from the query first and then run the query again with highlighting and the search terms, with the id field included (a sketch of this two-pass approach follows this list).
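A sketch of that two-pass approach as raw Solr queries; the core name, field names and ids here are made up:
http://localhost:8983/solr/mycore/select?q=content:report&fl=id&rows=10&hl=false
http://localhost:8983/solr/mycore/select?q=content:report&fq=id:(doc1 OR doc2 OR doc3)&fl=id&hl=true&hl.fl=content&rows=10
The first query returns only ids and skips highlighting; the second restricts the result set to those ids and turns highlighting on for just the field that needs it.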
Finally, for the memory problem, I grudgingly had to implement an unorthodox approach to solve the Tomcat/Java memory hogging issue (as Java never gives memory back to the OS).
I created a memory governor service that runs with debug privilege and calls the Windows API to force the Tomcat process to release memory. I also use a global mutex to prevent access to Tomcat while this happens when a call comes in.
Surprisingly, this approach is working out well, but it is not without its own perils if you do not have the option to control access to Tomcat.
If you find a better solution/configuration changes please let us know.
