"OutOfMemoryError: PermGen space" with Grails webapps - grails

Our team has been working with Grails (version 2.3.5) for a little under a year now, and the delivery team managing our servers has little to no experience related to applications written in Grails.
We have several Tomcat 7 instances, both in the test and production environments, each hosting a number of webapps. Some of the instances that contain only webapps written in Java (with Spring and Hibernate) run up to about 20 contexts with no major issue, yet anything past 6 Grails applications (applications quite similar to their Java counterparts) starts regularly causing the dreaded PermGen space error.
The allocated PermGen is currently 536 MB, and the delivery team obviously suggests either using a separate instance for the new applications or increasing the allocated memory; at the same time they are urging us to verify how these few apps are saturating the memory.
Our impression is that this is normal with Grails apps, but not having any senior Grails developer we have no way to confirm it from experience or better knowledge.
Is 536 MB too little PermGen space for 8 "regular" Grails webapps?
Update:
To specify what I mean by "regular": these are all front-end/back-end pairs for different services, where the front-end does little more than show a list of requests, walk the user through a wizard from zero to a completed request, validate the data, persist it, call a web service to get a protocol number, and in a couple of cases call an external payment gateway.
The back-end is used to manage requests and performs similar operations.
Every app has maybe around 20 entities with their respective controllers, services, and views; on top of that we have a few classes to handle security with Spring Security and an external infrastructure.

That's how it is. You have basically two options.
Migrate to Java 8 (see http://www.infoq.com/articles/Java-PERMGEN-Removed)
Increase the PermGen space further.
And a quick bit of background: unlike plain Java with Spring, Groovy and Grails generate quite a lot of classes at runtime (GSPs being one example). Groovy itself also generates a huge number of classes: each closure is a class. All of this puts pressure on PermGen.
To ease the pressure, get rid of all unnecessary plugins, consolidate GSPs, rethink closures, use AOP only when absolutely needed, etc.
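To make the closure point concrete, here is a small illustration you can paste into groovyConsole (a sketch; the exact generated class names vary by Groovy version): every closure literal compiles to its own synthetic class, and each loaded class takes up PermGen space.

// Each closure literal below is compiled into its own class
// (typically named Demo$_run_closure1, Demo$_run_closure2, ...),
// and every class that gets loaded occupies PermGen.
class Demo {
    def run() {
        def doubler = { int x -> x * 2 }
        def shouter = { String s -> s.toUpperCase() }
        println doubler.getClass().name   // e.g. Demo$_run_closure1
        println shouter.getClass().name   // e.g. Demo$_run_closure2
    }
}
new Demo().run()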

We used to have similar problems, so our team started using one Tomcat per app. We also separated the credentials for security purposes. Now it's easier to manage them, monitor the logs and make periodic updates.
Hint: it's easier (and, imho, cleaner) to train your admin to create users and home dirs with Tomcat instances, and just provide the credentials.

Related

Grails: determine number of calls to various API services

I have this working Grails app, and I want to determine how many calls the various parts of its API get per time unit.
The app offers 100+ web services and has 10k+ open connections at any time, so it is pretty busy.
It's written in Grails 1.3.6 (developed over 10 years).
My overall target is to gain knowledge about which interfaces are actually used and how often.
I see several poor possibilities:
Log everything, parse the logs and 'grep -c'.... (Bad idea)
tcpdump... (Bad idea)
Increment global variables (thread-safe) where I want to instrument it (least bad idea; see the sketch below)
How do I get Grails to tell me which services are called and how often?
...without killing performance?
Any ideas/pointers will be appreciated :-)
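A minimal sketch of that counter approach, assuming a standard Grails filter is acceptable (the filter class, map and key format below are made up for illustration, and it is untested on Grails 1.3.6): a single before-filter bumps an AtomicLong per controller/action pair, so the services stay untouched and the overhead is one map lookup per request.

// grails-app/conf/ApiUsageFilters.groovy
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.AtomicLong

class ApiUsageFilters {

    // "controller.action" -> running call count; expose it via an admin
    // action, a scheduled log statement, or JMX, whatever suits you
    static final ConcurrentHashMap<String, AtomicLong> COUNTS =
            new ConcurrentHashMap<String, AtomicLong>()

    def filters = {
        countApiCalls(controller: '*', action: '*') {
            before = {
                String key = "${controllerName}.${actionName}"
                COUNTS.putIfAbsent(key, new AtomicLong())
                COUNTS.get(key).incrementAndGet()
                return true   // never block the request
            }
        }
    }
}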

Maximum concurrent users for JSF2?

I am working on a Java Web Application, using the following frameworks : Spring 3.1, JSF 2.1.26 and RichFaces 4.3.3.
The whole app is running in the Amazon Cloud on m1.medium instances (single core, 2 to 2.4 GHz), with Tomcat 7.
My customer asked questions about the performances of the web application, and about the number of concurrent users that can be handled on the same server.
He gave me a report showing that a servlet running on roughly the same hardware as the Amazon medium instance is able to serve about 1,000 requests per second (40 KB page):
https://www.webperformance.com/library/reports/windows_vs_linux_part1/
I took a classic page with header/footer, data table, sort/search/filters/data scroller... (80 KB). I removed the database and the filters (security, etc., except the JSF one) and kept 20 visible rows. Without any load, that page takes about 300 ms to load.
When I executed the load test for my application, I realized that it can only serve 20 requests per second before the request/response time exceeds 1,000 ms.
Can you tell me if this is normal behavior?
I can understand that a JSF page takes longer to build than a simple servlet one, but not being able to serve more than 20 requests per second, while the servlet can serve 1,000, is puzzling.
Is there any standard benchmark for a typical JSF application?
If you think I have optimization problem, can you tell me where I can search?
Thanks in advance for your answer!
In my personal opinion, you should first take a look at this article at JSF Central: Understanding JSF 2.0 Performance – Part 3. The code can be found on GitHub, including the WAR files used in the comparison. There you can find a simple web application using JSF and the same application implemented in different web frameworks (Spring MVC with JSP or Thymeleaf, Tapestry, Wicket, Grails, or a plain servlet with JSP).
The demo app has a simple stack using an in-memory database (HSQLDB) and JPA, so it should be pretty simple to deploy it on the Amazon Cloud. That can give you a starting point for what you can expect from that environment and how you should set it up properly. Remember that in a complex system many elements impact performance, so you should later evaluate things like which parameters your persistence layer uses and so on.
For JSF, it is known that Apache MyFaces will give you the best possible performance in all aspects, so if you can, you should try the RichFaces + MyFaces combination.

Multiple redmine instances best practices

I'm studying the best way to run multiple Redmine instances on the same server (basically I need a database for each Redmine group).
Until now I have 2 options:
Deploy a Redmine instance for each group
Deploy one Redmine instance with multiple databases
I really don't know what the best practice is in this situation; I've seen people doing it both ways.
I've tested deploying multiple Redmines (3 instances) with nginx and Passenger. It worked well, but I think it may not be feasible with a lot of instances. Each app needs around 100 MB of RAM, and as requests increase Passenger tends to allocate more processes to the app. This scenario seems bad if we have a lot of instances.
Option 2 seems reasonable; I think I could implement it with Rails environments. But I think there are some security problems related to sessions (I suspect a user of site A would be allowed to perform actions on site B after authenticating on A).
Is there any good practice for this situation? What's the best approach to take?
Another requirement related to this: we must be able to create or shut down a Redmine instance without interrupting the others (e.g. we should avoid server restarts).
Thanks for any advice, and sorry for my English!
Edit:
My solution:
I used one Redmine instance per group, managed with nginx + Unicorn so that each instance can be handled independently (Passenger didn't allow me to manage the instances independently).
The two options are not so different after all. The only difference is that in option 2, you only have one copy of the code on your disk.
In any case, you still need to run separate worker processes for each instance, as Redmine (and generally most Rails apps) doesn't support switching databases per request, and some data regarding a given environment is cached in-process.
Given that, there is not much incentive to share even the codebase, as it would require certain monkey patches and symlink magic to allow proper initialization despite the intentional configuration differences (database and email configuration, paths to uploaded files, ...). The Debian package does that, but it is (in my eyes) rather brittle and leads to a non-standard system.
But to stress again: even if you share the same code on the disk between instances, you can't share the running worker processes.
Running multiple instances from the same codebase is not officially supported by Redmine. However, the Debian/Ubuntu packages seem to support such an approach. See:
Multiple instances of redmine on Debian squeeze
So, generally:
If you use Debian/Ubuntu go with option #2
Otherwise go with #1
Rolling forward a couple of years, you might now want to consider a third option: using Docker containers for each of your Redmine instances.
I've been using https://github.com/sameersbn/docker-redmine.git and have been quite happy with it, except that it doesn't yet support handling of incoming mail for creating and commenting on tickets.

How to reduce the use of PermGen space in Grails

In my Grails application I'm using the Spring Security Core plugin for authentication. I'm facing a serious problem with it: my application used to take 21 seconds to start on Tomcat, and after installing the plugin startup roughly doubled to about 43 seconds.
So far so good, but then the Tomcat server began throwing 'PermGen space' memory errors. PermGen was 64 MB before and is now 256 MB, so the error doesn't crash my app as often.
I wonder whether there is some plugin configuration that would reduce the incidence of this error, or some way to release this cache, because the number of users is increasing, and if I can't solve it I'll unfortunately have to drop the plugin, which otherwise seems an excellent choice for application security.
Could someone also tell me whether the number of plugins used in an application affects this memory?
PermGen is the part of memory that stores the static components of your app, mostly classes. It is not affected by the number of users or by the logs associated with user activity; those consume heap space instead.
To reduce PermGen usage, you have to review your code, redesign algorithms that contain unnecessary or redundant objects and operations, and consolidate variables and functions where possible. Generally speaking, simpler code produces fewer and smaller compiled classes. That's how you save PermGen space.
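If you want to see where you actually stand before refactoring, the JVM exposes the pool through the standard java.lang.management API. A minimal Groovy sketch, assuming a HotSpot JVM before Java 8 (where the pool name contains 'Perm Gen'):

import java.lang.management.ManagementFactory

// Prints used vs. maximum space for the permanent generation pool.
// On HotSpot the pool is named e.g. "PS Perm Gen" or "CMS Perm Gen",
// depending on the garbage collector in use.
ManagementFactory.getMemoryPoolMXBeans()
        .findAll { it.name.contains('Perm Gen') }
        .each { pool ->
            long usedMb = pool.usage.used.intdiv(1024 * 1024)
            long maxMb = pool.usage.max.intdiv(1024 * 1024)
            println "${pool.name}: ${usedMb} MB used of ${maxMb} MB max"
        }

You can drop this into a controller action or run it in the Grails console to watch how the pool fills up as GSPs are compiled and plugins load.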
Some versions of Tomcat put more pressure on PermGen than others. There was a minor version in the 6.x line that I couldn't ever get to stay running reliably. And even with the latest versions you still need to tweak your memory settings. I use the following, and it works best for me. I still get these errors now and again, especially if I'm doing a lot of runtime compiling. In production it is a non-issue, because all the development overhead of Grails isn't there.
-XX:MaxPermSize=512m -XX:PermSize=512m -Xms256m -Xmx1024m

Correct way to implement standalone Grails batch processes?

I want to implement the following:
Web application in Grails going to MongoDB database
Long-running batch processes populating and updating that database in the background
I would like both of them to reuse the same Grails services and the same GORM domain classes (using the mongodb plugin for Grails).
For the Web application everything should work fine, including the dynamic GORM finder methods.
But I cannot figure out how to implement the batch processes.
a. If I implement them as Grails service methods, their long-running nature will be a problem. Even wrapping them in some async executors will unnecessarily complicate everything, as I'd like them each to be a separate Java process so they can be monitored and stopped easily and separately.
b. If I implement them as src/groovy scripts and try to launch them from the command line, I cannot inject the Grails services properly (the ApplicationHolder approach throws an NPE) or get the GORM finder methods to work. The standalone GORM guides all have Hibernate in mind, and overall it doesn't seem the right route to pursue.
c. I considered the 'batch-launcher' Grails plugin but it failed to install and seems a bit abandoned.
d. I considered the 'run-script' Grails command to run the scripts from src/groovy and it seems it might actually work in development, but seems not the right thing to do in production.
I cannot be the only person with such a problem - so how is it generally solved?
How do people run standalone scripts sharing the code base and DB with their Grails applications?
Since you want the jobs processing to be in a separate JVM from your front-end application, the easiest way to do that is to have two instances of Grails running, one for the front-end that serves web requests, and the other to deal with job processing.
Thankfully, the rich ecosystem of plugins for Grails makes this sort of thing quite easy, though perhaps not the most efficient, since running an entire Grails application just for processing is a bit overkill.
The way I tend to go about it is to write my application as one app, with services that take care of the job processing. These services are tied to the RabbitMQ plugin, so the general flow is that the web requests (or quartz scheduled jobs) put jobs into a work queue, and then the worker services take care of processing them.
The advantage of this is that, since it's one application, I have full access to all of the domain objects, etc., and I can leverage the decoupled nature of a message queue to scale out my front- and back-ends separately without needing more than one application. Instead, I can just install the same application multiple times and configure the number of threads dedicated to processing jobs and/or the queues that the job processors are looking at.
So, with this setup, for development, I will usually just set the number of job processing threads to whatever makes sense for the development work I'm doing, and then just a simple grails run-app, and I have a fully functional system (assuming I have a RabbitMQ server running to use as well).
Then, when I go to deploy into production, I deploy 2 instances of the application, one for the front-end work and the other for the back-end work. I just configure the front-end instance to have 1 or 0 threads for processing jobs, and the back-end instance I give many more threads. This lets me update either portion as needed or spin up more instances if I need to scale one part or the other.
I'm sure there are other ways to do this, but I've found this to be both really easy to develop (since it's all one application), and also really easy to deploy, scale, and maintain.
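For illustration, the worker side of that setup can be as small as a single Grails service. A rough sketch, assuming the conventions of the RabbitMQ plugin mentioned above (a static rabbitQueue property plus a handleMessage method); the queue name, ReportJobService and the generate() call are made up for this example:

// grails-app/services/ReportJobService.groovy
// Consumes jobs that the front-end (or a Quartz-scheduled trigger)
// published with something like: rabbitSend 'reportJobs', [reportId: 42]
class ReportJobService {

    static rabbitQueue = 'reportJobs'   // queue this worker listens on

    def reportService                   // ordinary Grails service, injected as usual

    void handleMessage(Map job) {
        // Runs on a consumer thread inside the back-end instance;
        // full access to GORM domain classes and other services.
        reportService.generate(job.reportId)
    }
}

The front-end and back-end deployments then differ only in how many consumer threads they are configured to run, which is what makes the "same WAR, different configuration" approach described above work.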
