Tomcat per-webapp memory settings

I have two web applications running inside Tomcat. The Java heap is allocated to Tomcat as a whole and is shared by both applications. One application consumes most of it and the other gets an OutOfMemoryError.
Is there any way to set memory limits per web application, say 70% for one webapp and 30% for the other out of the overall memory allocated to Tomcat?
Regards
Ganesh

The memory is defined per JVM instance, so if you are using one Tomcat you cannot do it.
However, you can run two Tomcat instances, one per web application, and then you will have finer control over the memory allocation for each webapp.
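As a rough sketch (paths and heap sizes are illustrative, assuming about 2 GB in total split 70/30), each instance gets its own CATALINA_BASE and its own bin/setenv.sh (setenv.bat on Windows):
# instance 1 (~70% of the memory): $CATALINA_BASE_APP1/bin/setenv.sh
export CATALINA_OPTS="-Xms512m -Xmx1400m"
# instance 2 (~30%): $CATALINA_BASE_APP2/bin/setenv.sh
export CATALINA_OPTS="-Xms256m -Xmx600m"
Each instance also needs its own HTTP, AJP and shutdown ports in its server.xml so the two Tomcats do not clash.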

No. There is no way for one portion of Java code to control the memory consumption of other code it calls. In other words, the web container is just a Java program that calls methods of classes found in the applications.
So the only control you have is JVM arguments, and those arguments only hint the JVM at roughly what point to fail with an out-of-memory error. No, it is not possible.

Related

Memory usage of keycloak docker container

When we start the Keycloak container, it uses almost 700 MB of memory right away. I was not able to find more details on how and where it is using this much memory. I have a couple of questions:
Is there a way to find more details about which processes are taking more memory inside the container? I was looking at the file /sys/fs/cgroup/memory/memory.stat inside the container, which didn't give much info.
Is it normal for the Keycloak container to use this much memory, or do we need to tweak the configuration for better performance?
I would also appreciate any findings that can be leveraged to improve the overall performance of the application.
Keycloak is a Java app, so you need to understand the Java/JVM memory footprint first: What is the memory footprint of the JVM and how can I minimize it?
If you want to analyze Java memory usage, then Java VisualVM is a good starting point.
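For example, a rough way to poke at this (the container name keycloak and PID 1 are illustrative, and this assumes the image ships a JDK with jcmd and that the JVM was started with native memory tracking enabled, e.g. through JAVA_OPTS):
# container-level view from the host
docker stats keycloak
# JVM-level breakdown; requires the JVM to have been started with -XX:NativeMemoryTracking=summary
docker exec -it keycloak jcmd 1 VM.native_memory summary
The first command shows what the container as a whole uses; the second splits the JVM's footprint into heap, metaspace, threads, code cache and so on.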
700 MB for Keycloak is normal. There is an initiative to move Keycloak to Quarkus (https://www.keycloak.org/2020/12/first-keycloak-x-release.adoc), which will also reduce the memory footprint, but it is still a preview and not generally available.
In theory you can switch to a different runtime (e.g. GraalVM), but then you may run into different issues; it isn't an officially supported setup.
IMHO: trying to optimize Keycloak's memory usage much further would be over-engineering; it is a Java app.

Is the -Xmx memory limit for IntelliJ IDEA shared between open project windows?

IntelliJ IDEA memory settings can be customized by editing the -Xmx option in idea64.vmoptions. Let's say I set
-Xmx2g
and then open 5 different projects at the same time. Can IDEA eventually consume up to ~10 GB of memory, or will it be limited to 2 GB plus some overhead?
In the memory usage indicator in the lower right corner of the IDEA window, I see a different allocated-memory value for each project. On the other hand, these values seem to correlate over time.
It's the same single process running in the same JVM, so the limit is for all the windows/projects.
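In other words, a single idea64.vmoptions applies to that one process no matter how many project windows are open; a minimal sketch (the code-cache line is just a typical companion option, values are illustrative):
-Xms256m
-Xmx2g
-XX:ReservedCodeCacheSize=512m
All five projects then share the single 2 GB heap defined here.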

Solr on Tomcat, Windows OS consumes all memory

Update
I've configured both -Xms (initial memory) and -Xmx (maximum memory allocation) JVM parameters. After a restart I hooked up VisualVM to monitor the Tomcat memory usage. While the indexing process is running, the memory usage of Tomcat looks fine; consumption stays within the range of the defined JVM parameters. (see image)
So it seems that filesystem buffers are consuming all the leftover memory and never releasing it? Is there a way to handle this behaviour, for example by changing the nGram size or the directoryFactory?
I'm pretty new to Solr and Tomcat, but here we go:
OS Windows server 2008
4 Cpu
8 GB Ram
Tomcat Service version 7.0 (64 bit)
Only running Solr
No optional JVM parameters set, but Solr config through GUI
Solr version 4.5.0.
One Core instance (both for querying and indexing)
Schema config:
minGramSize="2" maxGramSize="20"
most of the fields are stored = "true" (required)
Solr config:
ramBufferSizeMB: 100
maxIndexingThreads: 8
directoryFactory: MMapDirectory
autocommit: maxdocs 10000, maxtime 15000, opensearcher false
cache (defaults): filtercache initialsize:512 size: 512 autowarm: 0
queryresultcache initialsize:512 size: 512 autowarm: 0
documentcache initialsize:512 size: 512 autowarm: 0
We're using a .Net service (based on Solr.Net) for updating and inserting documents into a single Solr core. The size of documents sent to Solr varies from 1 KB up to 8 MB; we send the documents in batches, using one or multiple threads. The current size of the Solr index is about 15 GB.
The indexing service runs for around 3 to 4 hours to complete all inserts and updates to Solr. While the indexing process is running, the Tomcat process memory usage keeps growing to more than 7 GB RAM and does not come down, even after 24 hours.
After a restart of Tomcat, or a Reload Core in the Solr Admin, the memory drops back to 1 to 2 GB RAM. Memory leak?
Is it possible to configure the max memory usage for the Solr process on Tomcat?
Are there other alternatives? Best practices?
Thanks
You can set up JVM memory settings for Tomcat. I usually do this with a setenv.bat file in the bin directory of Tomcat (the same directory as the catalina.bat/.sh files).
Adjust the following values as per your needs:
set JAVA_OPTS=%JAVA_OPTS% -Xms256m -Xmx512m
Here are clear instructions on it:
http://wiki.razuna.com/display/ecp/Adjusting+Memory+Settings+for+Tomcat
First you have to set the -Xmx parameter to limit the maximum memory that can be used by Tomcat. But in the case of Solr you have to remember that it uses a lot of memory outside of the JVM to handle filesystem buffers, so never give Tomcat more than 50% of the available memory in this case.
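On the 8 GB box described above, that would mean something like this in setenv.bat (values are illustrative and sit roughly at the 50% ceiling suggested here):
set JAVA_OPTS=%JAVA_OPTS% -Xms1024m -Xmx4096m
The remaining ~4 GB is then left to the OS page cache, which is what MMapDirectory actually relies on.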
I have the following setup (albeit a much smaller problem)...
5000 documents, document sizes range from 1MB to 30MB.
We have a requirement to run under 1GB for the Tomcat process on a 2 CPU / 2GB system
After a bit of experimentation I came up with these JVM settings.
-Xms448m
-Xmx768m
-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC
-XX:ParallelCMSThreads=4
-XX:PermSize=64m
-XX:MaxPermSize=64m
-XX:NewSize=384m
-XX:MaxNewSize=384m
-XX:TargetSurvivorRatio=90
-XX:SurvivorRatio=6
-XX:+CMSParallelRemarkEnabled
-XX:CMSInitiatingOccupancyFraction=55
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+OptimizeStringConcat
-XX:+UseCompressedOops
-XX:MinHeapFreeRatio=5
-XX:MaxHeapFreeRatio=5
These helped, but I still encountered OutOfMemoryError issues and Tomcat using too much memory even with such a small dataset.
Solution, or the things/configuration I have set so far that seem to hold up well, are as follows:
Disable all caches other than QueryResultCache
Do not include text/content fields in your query; only include the id
Do not use a row size greater than 10 and do not include highlighting.
If you are using highlighting (this is the biggest culprit), get the document identifiers from the query first and then run the query again with highlighting and the search terms, with the id field included (a sketch of this two-pass approach follows below).
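A rough sketch of that two-pass approach against the select handler (URL, core name, field names and ids are all illustrative):
# pass 1: fetch only the matching ids, no highlighting, small row size
curl "http://localhost:8080/solr/core1/select?q=content:searchterm&fl=id&rows=10&hl=false"
# pass 2: re-run the search restricted to those ids, now with highlighting
curl "http://localhost:8080/solr/core1/select?q=content:searchterm&fq=id:(17%20OR%2042%20OR%2099)&fl=id&rows=10&hl=true&hl.fl=content"
The second request only has to highlight the handful of documents you are actually going to display.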
Finally, for the memory problem: I had to grudgingly implement an unorthodox approach to solve the Tomcat/Java memory-hogging issue (as Java never gives memory back to the OS).
I created a memory-governor service that runs with debug privilege and calls the Windows API to force the Tomcat process to release memory. I also use a global mutex to prevent access to Tomcat while this happens when a call comes in.
Surprisingly, this approach is working out well, but it is not without its own perils if you do not have the option to control access to Tomcat.
If you find a better solution/configuration changes please let us know.

memory usage of grails application

I want to know how much memory a Grails application can use.
Does it depend on the number of domain classes and installed plugins?
I am developing an application, and when I test it on Tomcat it continuously runs out of memory.
It is currently using almost 500 MB.
Moreover, if this is not the case, can you suggest what methods of memory management can be used?
Have a look at the Grails Java Melody plugin. It will give you all kinds of statistics on your running app, in your case the memory stats should help but there is so much more to this Swiss Army knife of app monitoring.
Initially, have you tried playing with JVM settings, e.g. -XX:PermSize=256m -XX:MaxPermSize=1g -Xms256m -Xmx512m
Java, Groovy and Grails have a fairly high baseline memory usage. 500 MB is really a small amount, and it's pretty common to start from 1 GB (I mean the Tomcat memory configuration). So don't worry about 500 MB, it's ok.
As for domains, classes, etc.: of course all new classes and new code require some memory, but I'm sure that in your case it's just a few percent of the total; the rest is used by the Grails libraries, Tomcat, the JVM, etc.
PS: there is also a common Tomcat problem with PermGen space - https://stackoverflow.com/search?q=permgem+space

make full use of 24G memory for jboss

We have a 64-bit Solaris SPARC machine running JBoss. It has 24 GB of memory, but because of a JVM limitation I can only set JAVA_OPTS="-server -Xms256m -Xmx3600m -XX:MaxPermSize=3600m".
I don't know the exact cap, but if I set it to 4000m, Java won't accept it.
Is there any way to use this 24 GB of memory fully, or at least more efficiently?
If I use a cluster on one machine, is it stable? I heard it needs some parts of the code to be rewritten.
All 32-bit processes are limited to 4 GB of addressable memory (2^32 bytes = 4 GiB).
If you can run JBoss as a 64-bit process (usually just adding "-d64" to JAVA_OPTS), then you can tune JBoss with increased thread and object pools to make use of that memory. As others have mentioned, you might see horrific garbage collection pauses, but you may be able to figure out how to avoid those with some load testing and the right choice of garbage collection algorithm.
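A sketch of what the 64-bit options could look like (heap and perm sizes are illustrative; leave headroom for the OS and anything else on the box):
JAVA_OPTS="-server -d64 -Xms4g -Xmx16g -XX:MaxPermSize=512m"
With -d64 the ~3.6 GB ceiling goes away, but the bigger the heap, the more attention the garbage collector settings will need.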
If you have to run as a 32-bit process, then yes, you can at least run multiple instances of JBoss on the same server. In your case there are three options: zones, virtual interfaces, and port binding.
Solaris Zones
Since you're running solaris, it's relatively easy to create virtual servers ("non-global zones") and install jboss in each one just like you would the real server.
Multi-homing
Configure an extra IP address for each jboss instance you want to run (usually by adding virtual interfaces, but you could also install extra NICs) and bind each instance of jboss to its own IP address with the "-b" startup option.
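For example (the IP addresses are illustrative, one per virtual interface):
./run.sh -b 192.168.1.11 &
./run.sh -b 192.168.1.12 &
Each instance then listens on its usual ports but only on its own address, so no port changes are needed.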
Service Binding Manager
This is the most complicated / brittle option, but at least it requires no OS changes.
Whether or not to actually configure the jboss instances as a cluster depends on your application. One benefit is the ability to use http session failover. One downside is cluster merge issues if your application is unstable or tends to become unresponsive for periods of time.
You mention your application is slow; before you go too far down any of these paths, try to understand where your bottlenecks are. Use jvisualvm or jstat to observe if you're doing frequent garbage collections. If you're not, then adding heap memory or extra jboss instances may not resolve your performance woes.
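For a quick look at GC behaviour (the PID is whatever your JBoss java process is):
jstat -gcutil <jboss-pid> 5000
This prints heap-region occupancy plus young/full GC counts and times every 5 seconds; a steadily climbing full-GC count (FGC) while the old generation (O) sits near 100% is the classic sign that you really are short of heap.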
You can't use the full physical memory: the JVM requires the maximum heap to fit in one contiguous chunk of memory. Try java -Xmxnnnnm -version to test the maximum heap size available on your box.
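For example (using the figures from the question; the idea is to binary-search for the largest value the 32-bit JVM will accept):
java -Xmx3600m -version
java -Xmx4000m -version
On the machine described above the first command succeeds and the second fails at startup, because the JVM cannot reserve a contiguous 4000 MB block.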
