PermGen problems with Lift and Jetty - memory

I'm developing on the standard Lift platform (maven and jetty). I'm repeatedly (once every couple of days) getting this:
Exception in thread "7048009#qtp-3179125-12" java.lang.OutOfMemoryError: PermGen space
2009-09-15 19:41:38.629::WARN: handle failed
java.lang.OutOfMemoryError: PermGen space
This is in my dev environment. It's not a problem because I can keep restarting the server. In deployment I'm not having these problems so it's not a real issue. I'm just curious.
I don't know too much about the JVM. I think I'm correct in thinking that permanent generation memory is for things like classes and interned strings? What I remember is a bit mixed up with the .NET memory model...
Any reason why this is happening? Are the defaults just crazily low? Is it to do with all the auxiliary objects that Scala has to create for Function objects and similar FP things? Every time I restart Jetty with newly written code (every few minutes) I imagine it re-loads classes etc. But even so, it can't be that many, can it? And shouldn't the JVM be able to deal with a large number of classes?
Cheers
Joe

From this post:
This exception occurs for one simple reason:
the PermGen space is where class properties, such as methods, fields, annotations, and also static variables, etc. are stored in the Java VM, but this space has the particularity of not being cleaned by the garbage collector.
So if your webapp uses or creates a lot of classes (I'm thinking of dynamic generation of classes), chances are you will run into this problem.
Here are some solutions that helped me get rid of this exception:
-XX:+CMSClassUnloadingEnabled : enables garbage collection in the PermGen space
-XX:+CMSPermGenSweepingEnabled : allows the garbage collector to remove even classes from memory
-XX:PermSize=64M -XX:MaxPermSize=128M : raises the amount of memory allocated to the PermGen space
Maybe this could help.
Edit July 2012 (almost 3 years later):
Ondra Žižka comments (and I have updated the answer above):
JVM 1.6.0_27 says: Please use:
CMSClassUnloadingEnabled (Whether class unloading enabled when using CMS GC)
in place of CMSPermGenSweepingEnabled in the future
See the full Hotspot JVM Options - The complete reference for more.
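As a concrete sketch of how the three flags above fit together on one command line (assuming a standalone Jetty launched via its start.jar; adjust for your own launcher):
java -XX:+CMSClassUnloadingEnabled -XX:PermSize=64M -XX:MaxPermSize=128M -jar start.jar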

If you see this when running mvn jetty:run, set MAVEN_OPTS.
For Linux:
export MAVEN_OPTS="-XX:+CMSClassUnloadingEnabled -XX:PermSize=256M -XX:MaxPermSize=512M"
mvn jetty:run
For Windows:
set "MAVEN_OPTS=-XX:+CMSClassUnloadingEnabled -XX:PermSize=256M -XX:MaxPermSize=512M"
mvn jetty:run
Should be fine now. If not, increase -XX:MaxPermSize.
You can also set these permanently in your environment.
For Linux, append the export line to ~/.bashrc (sketched below).
For Windows, press Win + Pause/Break, then go to Advanced > Environment Variables.
See also http://support.microsoft.com/kb/310519.
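For instance, on Linux the export line above can be made permanent like this (a sketch; the 512M value is just the one used above and can be raised further if needed):
echo 'export MAVEN_OPTS="-XX:+CMSClassUnloadingEnabled -XX:PermSize=256M -XX:MaxPermSize=512M"' >> ~/.bashrc
source ~/.bashrc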

This is because of the reloading of classes, as you suggested. If you are using lots of libraries etc., the number of loaded classes will grow rapidly with each restart. Try monitoring your Jetty instance with VisualVM to get an overview of memory consumption when reloading.
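If you prefer the command line over VisualVM, something like jstat can show the same trend (a sketch for a pre-Java-8 JVM, where the P column of -gcutil reports PermGen utilization; <jetty-pid> is a placeholder you would get from jps):
jps -l                           # list running JVMs and their PIDs
jstat -gcutil <jetty-pid> 5000   # print GC/PermGen utilization every 5 seconds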

The mailing list (http://groups.google.com/group/liftweb/) is the official support forum for Lift, and where you'll be able to get a better answer. I don't know the particulars of your dev setup (you don't go into much detail), but I assume you're reloading your war in Jetty without actually restarting it. Lift doesn't perform dynamic class generation (as suggested by VonC above), but Scala compiles each closure as a separate class. If you're adding and removing closures to your code over the course of several days, it's possible that too many classes are being loaded and never unloaded, taking up perm space. I'd suggest you enable the JVM options mentioned by VonC above and see if they help.
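To get a feel for how many of those closure classes exist, you can count them in the compiled output (a sketch, assuming a Maven layout with classes under target/classes and a Scala 2.x compiler that names closure classes with a $$anonfun suffix):
find target/classes -name '*$$anonfun*.class' | wc -l   # number of compiled closure classes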

The permanent generation is where the JVM puts stuff that will probably not be (garbage) collected, like custom classloaders.
Depending on what you are deploying, the perm gen setting can be too low. Some application/container combinations contain memory leaks, so when an app gets undeployed, stuff like class loaders is sometimes not collected, which fills up the perm space and generates the error you are seeing.
Unfortunately, currently the best option in this case is to max out the perm space with the following JVM flag (example for a 192m perm size):
-XX:MaxPermSize=192M (or 256M)
The other option is to make sure that neither the container nor the framework leaks memory.
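How you pass that flag depends on the container; as a hedged example (both lines are assumptions about your setup), Tomcat's catalina.sh picks it up from CATALINA_OPTS and Jetty's jetty.sh from JAVA_OPTIONS:
export CATALINA_OPTS="-XX:MaxPermSize=192M"   # Tomcat
export JAVA_OPTIONS="-XX:MaxPermSize=192M"    # Jetty (jetty.sh)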

Related

Rascal MPL how to increase heap size?

I just stumbled upon this old post that mentions Java heap space and how to change its parameters in Eclipse (in the eclipse.ini file). How do I set these in the new VSCode environment?
The Eclipse environment is quite different from the VSCode environment. Everything runs in a single JVM there, and so it was difficult for plugin writers to programmatically increase the heap size. This led to writing manual pages on the topic.
In VSCode we have a JVM for every Rascal process:
for every terminal REPL there is one JVM
for the Rascal LSP server there is one JVM
for the generic DSL-parametrized LSP server there is one JVM
there is one JVM for every deployed DSL server written in Rascal
And the extension code starts these JVMs, so we can control how much memory they receive. The latest release does this by assessing the total available memory on your machine and allocating a sumptuous amount for every process.
And so there is no configuration option for the user anymore, like the one we had to add for the Eclipse situation.

Not sure how to resolve OutOfMemory issue on Jenkins server?

My Jenkins server keeps crashing, so I generated a heap dump which I then put through VisualVM. It shows most of the memory is being used up by the class java.util.concurrent.ConcurrentHashMap$Node.
My understanding is that loads of objects are being referenced and are unable to be GC'd. As a result, most of the memory is being used up by this. Any idea how to resolve it? I'm new to system admin stuff, so not the most technically proficient, sorry.
TIA
I recently came across an OutOfMemoryError which crashed my Jenkins every 2 days. It was due to an LDAP bug in an old version of Java: Ldap Error and java fixed versions matrix
In my case updating Java fixed the problem.
Anyway, to investigate the OutOfMemoryError I did the following:
restarted Jenkins after the crash,
took incremental thread dumps every half an hour (they can be taken from <jenkinsUrl>/threadDump),
compared the thread dumps, which pointed me to a memory leak in the LDAP threads.
In general I'd also suggest to:
update Java, Jenkins and its plugins, and other problematic tools,
investigate the Jenkins logs and dumps, and profile the heap (what you already did; see the command sketch below).
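For reference, thread and heap dumps can also be taken directly from the command line with the standard JDK tools (a sketch; <jenkins-pid> and the file names are placeholders):
jstack <jenkins-pid> > threads-1.txt                             # thread dump, repeat every half hour and compare
jmap -dump:live,format=b,file=jenkins-heap.hprof <jenkins-pid>   # heap dump for VisualVM or Eclipse MAT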

Basics of Jmapping?

I've done some searching out there but couldn't find much really helpful info on it, but could someone try to explain the basics of Java memory maps? Like where/how to use it, its purpose, and maybe some syntax examples (input/output types)? I'm taking a Java test soon and this could be one of the topics, but jmap has not come up in any of my tutorials. Thanks in advance.
Edit: I'm referring to the tool: jmap
I would read the man page you have referenced.
jmap prints shared object memory maps or heap memory details of a given process or core file or a remote debug server.
NOTE: This utility is unsupported and may or may not be available in future versions of the JDK. In Windows Systems where dbgeng.dll is not present, 'Debugging Tools For Windows' needs to be installed to have these tools working. Also, PATH environment variable should contain the location of jvm.dll used by the target process or the location from which the Crash Dump file was produced.
http://docs.oracle.com/javase/7/docs/technotes/tools/share/jmap.html
It's not a tool to be played with lightly. You need a good profiler which can read its output, as jhat is only useful for trivial programs. (YourKit works just fine for 1+ GB heaps.)
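For a feel of the syntax, these are the typical invocations against a running JVM (a sketch based on the JDK 7 tool; <pid> is a placeholder you would get from jps):
jmap -heap <pid>                                 # heap configuration and usage summary
jmap -histo:live <pid>                           # histogram of live objects per class
jmap -dump:live,format=b,file=heap.hprof <pid>   # binary heap dump
jhat heap.hprof                                  # browse the dump at http://localhost:7000 (trivial programs only)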

Set Java Application's virtual machine max memory without access to VM parameters because of custom launcher?

I'm using a Java application which allows you to import custom files. On import, these files are loaded into memory.
The problem is that the files I want to import are very big, and this causes an OutOfMemory exception. The crash log also tells me that the VM was started with the Java parameter "-Xmx512m"; I want to change this to "-Xmx1024m" so that I have double the memory available.
The problem is that this application uses its own JRE folder and there's a launcher written in C which calls the jvm.dll file. Either way, java.exe and javaw.exe are never called, and thus I cannot set these parameters myself (if I delete these executables the application still runs - this is not the case with the dll).
So, my question is, can I set this VM parameter in another way? I'm even willing to alter the JRE files if there is no other way.
Update: Found some extra info:
jvm_args: -Djava.system.class.loader=com.company.loader.NativeClassLoader -Xmx160m -Xms160m -Xincgc
java_command: unknown
Launcher Type: generic
You would probably be better off attempting to eliminate the launcher and use a standard JVM. See if you can figure out what parameters Java is being launched with - it might help to dump the launcher and any associated configuration files.
Then you just call java yourself.
This may not work at all depending on what else the launcher is doing.
edit:
try:
java -Djava.system.class.loader=com.company.loader.NativeClassLoader -Xmx160m -Xms160m -Xincgc
from the command line against a real JVM. There is a good chance it will fail because of the NativeClassLoader or other stuff set up by the native launcher.
Also, you may be missing the actual Java class it is trying to start (I don't know if that "NativeClassLoader" needs the actual main class or not).
Without knowing more about the C launcher, I don't know if anyone can help you much. Perhaps you could contact the vendor? You might also dump the .exe file and see if there is any identifying text - if you could figure out where it came from, you might be able to find docs telling you how to forward parameters to the JVM.
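If the direct launch does work, the memory setting from the question can simply be raised on that command line (a sketch; the jar name and main class here are hypothetical placeholders, since the real ones would have to be dug out of the launcher):
java -Djava.system.class.loader=com.company.loader.NativeClassLoader -Xmx1024m -Xms160m -Xincgc -cp app.jar com.company.Main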

Grails startup is slow

Help! I'm porting a large Ruby app to Grails, but the Grails startup of my application takes more than 2 minutes.
I've already set dbCreate to "read". I've ensured my high-end dual-processor desktop Windows box gives Grails the RAM it needs (1 GB). I have no plugins installed. I have 170 domain classes that used to be Ruby classes.
When it starts up it prints out the line "Running Grails App.." and then hangs for a long time before it prints out the "Server running" line.
I just did something where I migrated all my ids to bigints. That seems to have worsened the problem. Now it takes about 10 minutes to start up.
I am new to Grails; would you please give me a few more details on what events to log at startup, and where? As for profiling the VM, it's been a few years since I did a lot of Java. What do you recommend as the best profiling tool to use now?
What else can I do to speed up Grails startup?
Unfortunately, I am not sure too much can be done beyond what you already did. As you know, there is a lot going on when it starts up, with all the plugin resolution / loading, adding dynamic methods to your domain objects, and overall dynamic nature of Groovy.
I am not sure which version you are using, but I've asked for the ability to turn off dependency checking at startup in 1.2, since that adds a bunch of time to startup as well.
I realize the above isn't too helpful, so perhaps this will be: I split up my application into several plugins. One for domain objects, one for graphing capability, one for Excel import, another for some UI constructs I needed. I didn't do it just because of slow startup times, but the advantage is that I can test parts of the system separately from each other before integrating everything together.
I am about to add a piece of new functionality that involves at least 10 new domain objects, and I am first developing them in a separate plugin by having stubs for the few objects they have to interact with from the core app. That allows me to both reduce startup times, and also have my code better isolated.
So if it's an option for you, try to separate out things so you can work on them separately, which will alleviate your issue somewhat. There may also be other benefits in terms of having your team work on smaller components separately, better modularization, etc.
Hope this is helpful.
170 domain classes is fairly large, but 2 minutes still seems really long to me. Do you have a ton of plugins installed? Potentially too verbose debug settings?
I'd be curious how long it would take if you created a fresh Grails app, copied in all of your domain objects (and the subset of plugins that the domain objects actually need to operate), and saw how long that takes to start.
Jean's suggestion about separating things out if possible is a good one. I've done something similar on previous projects where we have a domain plugin, and our other apps all rely on that domain plugin.
You could also use the Grails events to log some timing information on startup to see where your bottlenecks are. Timing the "PluginInstalled" event should be good, as I think the Hibernate plugin would be caught by this in addition to the other plugins.
You may have a dependency problem. If a plugin you use relies on a library in Maven that has 'open ended' dependencies, Grails will check each time whether there are newer versions in that range to download. I have no idea why anyone would specify it like this; it seems it would lead to unreliable behaviour. For me, the culprit is Amazon's Java AWS library, naturally used by a plugin that talks to Amazon's cloud.
http://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk/1.2.10
note how some of its dependencies are like this
org.apache.httpcomponents httpclient [4.1, 5.0)
It appears that every time, Grails looks for a newer version (and downloads it if it exists; I just noticed 4.2-alpha1 of httpclient come down when I ran it this time).
By removing that dependency from the plugin and manually adding the required libraries to my lib folder, I reduced my startup time from >30sec to <1sec.
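As a concrete sketch of that workaround (the 4.1.2 version number and the local Maven cache path are assumptions; any fixed release inside the [4.1, 5.0) range would do), the jar can be dropped into the application's lib directory so it is no longer resolved through the open-ended range:
cp ~/.m2/repository/org/apache/httpcomponents/httpclient/4.1.2/httpclient-4.1.2.jar lib/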
You might want to see if there are knobs you can turn outside of Grails in order to fix this.
Have you tried approaching this as a performance issue? You can take a look at the box's performance and try to find out what the bottleneck is. Is it CPU? Is it a disk read issue? Can you attach a profiler to the VM and find out what's using up most of your startup time?
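One cheap way to do that without a full profiler (a sketch using the standard JDK tools; <pid> is a placeholder from jps) is to take a few thread dumps while the app is hanging and see which stack frames keep showing up:
jps -l                              # find the PID of the starting Grails JVM
jstack <pid> > startup-dump-1.txt   # repeat a few times during the hang and compare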
Have you tried basics like these for further deployment to a servlet container of your choice or in-place .war bootstrapping?
grails -Ddisable.auto.recompile=true run-app
grails run-war
grails war
