GraalVM Heap Size Setting Recommendations

What is the usual recommended heap memory setting for a production environment, for a single microservice written in Java and compiled as a native image using GraalVM Community Edition? Should I specify both -Xms and -Xmx to keep the minimum and maximum heap size the same?

There is no specific recommendation in the documentation; it says:
Java Heap Size
When executing a native image, suitable Java heap settings will be determined automatically based on the system configuration and the used GC. To override this automatic mechanism and to explicitly set the heap size at run time, the following command-line options can be used:
-Xmx - maximum heap size in bytes
-Xms - minimum heap size in bytes
-Xmn - the size of the young generation in bytes
See here for the full document. JVM memory management is an old topic that has been discussed many times before:
how to choose the jvm heap size?
java - best way to determine -Xmx and -Xms required for any webapp & standalone app
Long story short: for any application there is no single "right" number. Some applications may require 4 GB, some 64 GB; it depends on the load and the data used per request (web app) or the OS (Windows/Linux) the app runs on. After you monitor the app for some time you can decide. It is not easy, which is why many people have been going serverless lately.
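For illustration, a native-image binary (hypothetically named my-service here) accepts those same flags at run time, so you could start it with explicit heap bounds:
./my-service -Xms512m -Xmx512m
Setting -Xms equal to -Xmx avoids heap resizing at run time; whether that actually helps depends on your GC and workload, so measure before pinning the values.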

Related

How can we increase java heap size in neo4j?

Can anyone please help me increase the Java heap size in Neo4j? Please explain the required instructions step by step as I am a newbie with Neo4j. I am not able to understand the answers in other threads, nor what's written in the Neo4j documentation.
If you are a newbie you don't need to increase the Java heap size; you can work with the default values.
It is well explained in the official documentation here.
Check also here and here.
What you need to do in general is:
Stop the neo4j service ($NEO4J_HOME/bin/neo4j stop)
Find the $NEO4J_HOME/conf/neo4j.conf file on your system and set these values:
Property Name Meaning
dbms.memory.heap.initial_size initial heap size (in MB)
dbms.memory.heap.max_size maximum heap size (in MB)
Start the neo4j service ($NEO4J_HOME/bin/neo4j start)
Generally it depends a lot on how much RAM your server has, but without knowing your workload and database size I would recommend the following parameters, which will most probably be suitable for any type of load:
RAM      Heap
16GB     6GB
32GB     10GB
64GB     24GB
128GB    31GB
You can also run the neo4j-admin command, which can suggest suitable values based on your current database and RAM size:
$NEO4J_HOME/bin/neo4j-admin memrec
Note that besides the heap size, there is one more parameter that affects performance depending on database size, and that is:
dbms.memory.pagecache.size
You can also set it manually in neo4j.conf.
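As a sketch, the relevant neo4j.conf lines for a server with 16GB of RAM (per the table above) might look like the lines below; the pagecache value is only an assumed example, so prefer what neo4j-admin memrec reports for your own database:
dbms.memory.heap.initial_size=6g
dbms.memory.heap.max_size=6g
dbms.memory.pagecache.size=7g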

Is the -Xmx memory limit for IntelliJ IDEA shared between open project windows?

IntelliJ IDEA memory settings can be customized by editing the -Xmx option in idea64.vmoptions. Let's say I set
-Xmx2g
and then open 5 different projects at the same time. Can IDEA eventually consume up to ~10 GB of memory, or will it be limited to 2 GB plus some overhead?
In the memory usage monitor in the lower right corner of the IDEA window, I see a different value of allocated memory for each project. On the other hand, these values seem to correlate over time.
It's the same single process running in the same JVM, so the limit applies to all the windows/projects together.
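For reference, a minimal idea64.vmoptions with the limit from the question could look like this (the values are only an example, and any other options already present in the file should be kept):
-Xms256m
-Xmx2g
-XX:ReservedCodeCacheSize=512m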

Solr on Tomcat, Windows OS consumes all memory

Update
I've configured both the -Xms (initial memory) and -Xmx (maximum memory) JVM parameters. After a restart I've hooked up VisualVM to monitor the Tomcat memory usage. While the indexing process is running, the memory usage of Tomcat seems OK; memory consumption stays within the range of the defined JVM params (see image).
So it seems that filesystem buffers are consuming all the leftover memory and the memory is never released? Is there a way to handle this behaviour, for example by changing the nGram size or the directoryFactory?
I'm pretty new to Solr and Tomcat, but here we go:
OS: Windows Server 2008
4 CPUs
8 GB RAM
Tomcat service version 7.0 (64 bit)
Only running Solr
No optional JVM parameters set, but Solr configured through the GUI
Solr version 4.5.0
One core instance (both for querying and indexing)
Schema config:
minGramSize="2" maxGramSize="20"
most of the fields are stored = "true" (required)
Solr config:
ramBufferSizeMB: 100
maxIndexingThreads: 8
directoryFactory: MMapDirectory
autocommit: maxdocs 10000, maxtime 15000, opensearcher false
cache (defaults): filtercache initialsize:512 size: 512 autowarm: 0
queryresultcache initialsize:512 size: 512 autowarm: 0
documentcache initialsize:512 size: 512 autowarm: 0
We're using a .NET service (based on Solr.Net) for updating and inserting documents on a single Solr core instance. The size of documents sent to Solr varies from 1 KB up to 8 MB; we're sending the documents in batches, using one or multiple threads. The current size of the Solr index is about 15 GB.
The indexing service runs for around 3 to 4 hours to complete all inserts and updates to Solr. While the indexing process is running, the Tomcat process memory usage keeps growing to more than 7 GB of RAM and does not come back down, even after 24 hours.
After a restart of Tomcat, or a Reload Core in the Solr Admin, the memory drops back to 1 to 2 GB of RAM. Memory leak?
Is it possible to configure the max memory usage for the Solr process on Tomcat?
Are there other alternatives? Best practices?
Thanks
You can set up the JVM memory settings on Tomcat. I usually do this with a setenv.bat file in the bin directory of Tomcat (the same directory the catalina.bat/.sh files are in).
Adjust the following values as per your needs:
set JAVA_OPTS=%JAVA_OPTS% -Xms256m -Xmx512m
Here are clear instruction on it:
http://wiki.razuna.com/display/ecp/Adjusting+Memory+Settings+for+Tomcat
First you have to set the -Xmx parameter to limit the maximum memory that can be used by Tomcat. But in the case of Solr you have to remember that it uses a lot of memory outside of the JVM to handle filesystem buffers. So never use more than 50% of the available memory for Tomcat in this case.
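Following that rule of thumb, a setenv.bat sketch for the 8 GB machine from the question could look like this (the 3 GB heap is just an assumed starting point, leaving the rest of the RAM to the OS filesystem cache):
set JAVA_OPTS=%JAVA_OPTS% -Xms3g -Xmx3g
Note that if Tomcat runs as a Windows service, the same flags usually have to be entered in the service's Java options rather than in setenv.bat.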
I have the following setup (albeit a much smaller problem)...
5000 documents, document sizes range from 1MB to 30MB.
We have a requirement to run under 1GB for the Tomcat process on a 2 CPU / 2GB system
After a bit of experimentation I came up with these JVM settings:
-Xms448m
-Xmx768m
-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC
-XX:ParallelCMSThreads=4
-XX:PermSize=64m
-XX:MaxPermSize=64m
-XX:NewSize=384m
-XX:MaxNewSize=384m
-XX:TargetSurvivorRatio=90
-XX:SurvivorRatio=6
-XX:+CMSParallelRemarkEnabled
-XX:CMSInitiatingOccupancyFraction=55
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+OptimizeStringConcat
-XX:+UseCompressedOops
-XX:MinHeapFreeRatio=5
-XX:MaxHeapFreeRatio=5
These helped, but I still ran into OutOfMemoryError issues and Tomcat using too much memory even with such a small dataset.
The solution, or rather the things and configuration I have set so far that seem to hold up well, are as follows:
Disable all caches other than QueryResultCache
Do not include text/content fields in your query; only include the id.
Do not use a row size greater than 10 and do not include highlighting.
If you are using highlighting (this is the biggest culprit), get the document identifiers from the query first and then run the query again with highlighting and the search terms, with the id field included, as sketched below.
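A rough sketch of that two-pass approach against the standard select handler; the host, core name and field names (id, content) are assumptions here, so adjust them to your schema:
http://localhost:8080/solr/collection1/select?q=content:searchterm&fl=id&rows=10
http://localhost:8080/solr/collection1/select?q=content:searchterm&fq=id:(17%20OR%2042)&fl=id&hl=true&hl.fl=content&rows=10
The first request returns only the ids; the second restricts the result set to those ids and asks for highlighting on just that handful of documents.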
Finally, for the memory problem: I had to grudgingly implement an unorthodox approach to solve the Tomcat/Java memory-hogging issue (as Java never gives memory back to the OS).
I created a memory governor service that runs with debug privilege and calls the Windows API to force the Tomcat process to release memory. I also have a global mutex to prevent access to Tomcat while this happens when a call comes in.
Surprisingly, this approach is working out well, but it is not without its own perils if you do not have the option to control access to Tomcat.
If you find a better solution or configuration changes, please let us know.

How can a Java program be tuned to work directly with the available RAM (32 GB) and a multi-core CPU?

I have a high-end machine with 32 GB of RAM and I want to utilize the available RAM effectively for Java processes. In practice, I have seen that I can run at most 3 JBoss instances (or Java processes) with a 3 GB max heap each on one box, and I still have 20 GB not utilized. Is it possible to create a Java program that works directly with RAM like C++, not relying on the Java memory model for creating objects on the JVM and not relying on the GC to reclaim memory? I mean a Java program working directly with RAM.

JBoss 6 Heap Size goes out of memory

On startup of the JBoss EAP 6 server, because of static caching the heap size increases to more than 4096M, while for the same application hosted on JBoss 5 GA the heap size does not exceed 2000M.
I am using following VM arguments to boot the server.
-server -Xms1024M -Xmx4096M -XX:MaxPermSize=1024M -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000
Is there a different GC strategy involved that makes the JBoss 6 heap increase?
Thanks
You can check the GC algorithm of the JVM using jmap -heap. But that is beside the point: memory utilization is purely based on what the application requires. If you had a 2 GB heap in your previous JBoss version, with the same load and the same other VM arguments, then either your infrastructure had a limiting factor in place (for example the thread pool configuration) or your application would have thrown an OOME.
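For example, on the machine running JBoss (fill in the PID yourself):
jps -l
jmap -heap <pid>
jps lists the running JVMs so you can find the JBoss process id; jmap -heap then prints the collector in use together with the configured and current heap sizes.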
"Is there a different GC strategy involved in getting Jboss 6 Heap 6 increase."
To add a note on above line, JBoss does not decide what GC algorithm should be adopted for your application. Its the Java (JRE) who decides (until and unless you direct it to particular configuration). Java decides based on the server, OS configuration.
JBoss will only comes with default min and max heap and perm size .. rest is all dependent on the Java you are using.
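If you do want to direct the JVM to a particular collector and watch its behaviour, the usual place in EAP 6 is the JAVA_OPTS variable in $JBOSS_HOME/bin/standalone.conf (standalone.conf.bat on Windows); the flags below are only an example for a HotSpot JDK 7/8:
JAVA_OPTS="$JAVA_OPTS -XX:+UseParallelGC -verbose:gc -XX:+PrintGCDetails -Xloggc:gc.log"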
