Performance problems with Jasper Reports and Grails/Groovy

I am experiencing severe performance problems generating PDFs with Jasper Reports in my Grails application. I am invoking the jasperService:
def reportDef = jasperService.buildReportDefinition(parameter, LocaleContextHolder.getLocale(), [data: emptyData])
Running on JBoss, performance is good at first. After a number of hours, however, it is 100+ times worse than right after JBoss starts: the response time for creating a single-page PDF grows from 7-12 seconds to several minutes. I am sure the lag is within this invocation, because I have added time measurements around it (see the sketch below). Since the report data is passed in via the parameters, I can also rule out database connection issues.
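For reference, a minimal sketch of that measurement; only the buildReportDefinition invocation is from the code above, the timing and logging around it are assumptions (log being the Grails-injected logger):

import org.springframework.context.i18n.LocaleContextHolder

long start = System.currentTimeMillis()
// the invocation from above, unchanged
def reportDef = jasperService.buildReportDefinition(parameter, LocaleContextHolder.getLocale(), [data: emptyData])
// this is the elapsed time that grows from seconds to minutes after hours of uptime
log.info "buildReportDefinition took ${System.currentTimeMillis() - start} ms"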
I have analyzed the heap: it is only ~50% used and does not change much during PDF creation. Overall memory is not fully used either.
I have analyzed PermGen as well; it is also far from full.
The CPU is permanently at 100% during creation, which is fine in itself, knowing that PDF creation is very CPU-intensive. I have ensured that no other process is holding up the PDF creation: first, restarting the process several times made no measurable difference, so I can exclude external interruption; second, performance is much better right after JBoss is restarted.
Given these facts, I started to analyze JBoss itself by taking thread dumps while the PDF creation thread was running. I see that nothing else is running (except the thread-dumping thread), neither when it is slow nor when it is fast after a restart. I can only see that in several thread dumps Groovy is performing AST transformations, which is not unusual for Groovy...
Now I am desperate. Heap/PermGen is OK, CPU is OK. What the hell are Jasper Reports / Grails doing?
Maybe someone has had similar experiences or an idea about the root cause? Is there something that needs to be cleaned up in Jasper Reports?
EDIT: My further analysis led to the unproven but fairly certain conclusion that JBoss 7.1.1 (latest stable) is the root cause. After installing the app on Tomcat, everything runs smoothly, even after several days. I'll keep this question open; maybe someone has had the same experience and would like to post it? Otherwise I will close it with this solution. I may also test my app on earlier JBoss versions or on 7.2/7.3.

The solution was that we had not noticed that JBoss was partially ignoring our Log4j configuration and was logging massively into server.log, which we were not monitoring. The Jasper and Grails plugins were writing dozens of MB into that log file for each PDF generation. After getting rid of this logging, performance was good again.
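For reference, a minimal sketch of the kind of Config.groovy change that silences such loggers; the package names here are assumptions, so check your server.log to see which loggers are actually flooding it:

// grails-app/conf/Config.groovy
log4j = {
    // raise the threshold for the suspected chatty packages (names assumed)
    error 'net.sf.jasperreports',        // Jasper Reports internals
          'org.codehaus.groovy.grails'   // Grails infrastructure
    root {
        warn()   // keep the root logger quiet as well
    }
}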

Related

How can I debug high CPU usage in Electron?

I'm writing an Electron app, and a few builds back testers started noticing that two electron.exe processes were consuming a lot of CPU time all the time: one pegging a CPU core and the other using about 85% of a core.
I'm certain that this was not always the case, as builds from several months ago didn't do this, but I'm at a loss as to how to debug which code changes may have introduced it, since the code base has evolved dramatically over that time.
process.getIOCounters() reports that several gigabytes of IO occur every few minutes. The application is not deadlocked and everything still works; it is just chewing through CPU. It happens any time the app is open, even in the background without any user input. I have only deployed this to Windows 10 x64 systems, on Electron 1.7.9 and also 1.7.5.
Based on the behavior, I'm certain this IO is interprocess communication between the renderer and main threads, but I'm not manually performing any IPC. I think the problem is caused by some module we've introduced that improperly resides in the renderer thread.
My question: how does one debug the Electron renderer/main thread IPC pipe? Can it be hooked to find out what the gigabytes of traffic contain?
Based on the past few days of attempting to debug this, I've answered the question for myself:
My question: how does one debug the Electron renderer/main thread IPC pipe?
Don't. Electron seemed like a good idea: writing all your client and platform code in the same place. But there are a lot of catches, and out of the blue libraries will have strange bugs that are costly to address because they are outside the mainstream use case. This certainly has a lot to do with me not being an Electron expert, but in the real world there are deadlines and timelines, and I can't always get up to speed as much as I would like.
I've updated my architecture to the tried and true service/GUI model. I'll maintain full browser support for the client code, as well as an Electron mode with hooks for some features when Electron is detected.
This allows me to quickly identify issues that are specific to a browser, version, or platform framework. It also lets me use whichever version of Node.js I like for the service, which has also been an issue in my case.
I still love Electron, though; I'm just going to be more careful as I use it. If I do discover the specifics of why I had this problem, I'll check back and report the details.
Update
So this issue was not directly related to Electron as I had supposed; the IPC was not between the renderer and main threads and was a red herring. It was actually a Chrome keyframe animation issue that was causing a 60 FPS redraw rate. I'm still not sure why this caused GBs of IPC, but whatever. See https://github.com/Microsoft/vscode/issues/22900
I was able to discover this by porting the app back to a native browser (with a Node.js service). I then ran it in Chrome, Edge, and Firefox. Only Chrome behaved this way.

Process Monitor > Name Not Found for I/O > Grails Application

I'm trying to speed up the startup of a Grails 2.3.7 application.
Part of this has been to move everything over to a RAM drive and start the project and IntelliJ from there.
I have noticed, though, that Grails tries to read many files, and in many cases these files are not there or the path does not exist.
It seems very hectic and disorganized.
Does anyone have any idea how to avoid these redundant and inefficient system calls, and how to speed up startup in general?
Is it a matter of Grails itself, or of the specific plugins being included?
A screenshot is available as well.
Additionally, I captured graphs of the various operations performed during startup. Unfortunately, CPU usage never reaches 100%, which suggests Grails startup may not be optimized to use all cores.

JBoss 7.1 - Constant heap increase

I've been tasked with analyzing some memory consumption issues with one of our web apps. I'd made myself passably familiar with tools like Mission Control and VisualVM and used them to resolve a number of leaks, but in doing so I came across behavior I can't account for.
Setup
JBoss 7.1.1 AS
Java 1.7.0_67
Specifically, I've found that even when I run only JBoss 7 by itself (that is, I turn off the deployer and just let the server run), I can see regular allocations (followed by garbage collection) of about 1 MB every 3 seconds or so.
On a whim, I took heap dumps immediately after a GC and then again after the allocations had been going on for a while. It seems the majority of the objects have to do with modules: either Xerces activity (reading the module XML, I guess?) or objects associated with ModuleLoader. Most of the objects have 'References' that look something like this:
http://i.stack.imgur.com/LlUmv.png (sorry, I can't mark up images)
My thinking (which may be entirely off base) was that JBoss scans for new modules to support hot deploys. The thing is, that use case isn't one I ever exercise: new deployments always involve shutting down the server, so dynamically scanning for modules is unnecessary.
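For what it's worth, the periodic hot-deploy scan can be switched off in standalone.xml; this is a sketch against the stock AS 7.1 configuration layout, and whether that scanner is actually responsible for the ModuleLoader churn is exactly the open question here:

<subsystem xmlns="urn:jboss:domain:deployment-scanner:1.1">
    <!-- scan-enabled="false" stops the periodic scan of the deployments dir -->
    <deployment-scanner path="deployments"
                        relative-to="jboss.server.base.dir"
                        scan-enabled="false"/>
</subsystem>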
I guess my questions are:
Does my belief about module loading have any merit?
If so, is there any way to get JBoss to stop scanning?
If not, does anyone have any suggestions about what else I can investigate?
Thanks for reading!

How to reduce the use of PermGen space in Grails

My Grails application uses the Spring Security Core plugin for authentication. I am facing a serious problem with it: my application took 21 seconds to start on Tomcat, and noticeably longer after installing the plugin.
So far so good, but then 'PermGen space' memory errors began to occur on the Tomcat server. The PermGen size was 64 MB before; it is now 256 MB, so that the error does not crash my app so often.
I wonder whether you know of some plugin configuration that would reduce the incidence of this error, or some way to trigger the release of this cache, because the number of users is increasing. If I cannot solve it, I will unfortunately have to drop the plugin, which otherwise seems to be an excellent choice for application security.
Could someone tell me whether the number of plugins used in an application affects this memory?
PermGen is the part of memory that stores the static components of your app, mostly classes. It is essentially unaffected by the number of users or the logs associated with user activity; those consume heap space instead.
To reduce PermGen usage, you have to review your code: redesign algorithms that contain unnecessary or redundant objects and operations, and consolidate variables and functions where possible. Generally speaking, simpler code produces fewer and smaller classes, and that is how you save PermGen space.
Some versions of Tomcat hit PermGen problems more than others. There was a minor version in the 6.x line that I could never get to stay up reliably. And even with the latest versions, you still need to tweak your memory settings. I use the following, and it works best for me. I still get the errors now and again, especially if I'm doing a lot of runtime compiling. In production it is a non-issue, because all the development overhead of Grails isn't there.
-XX:MaxPermSize=512m -XX:PermSize=512m -Xms256m -Xmx1024m
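For Tomcat, flags like these typically go into CATALINA_OPTS, for example via a bin/setenv.sh script (a standard Tomcat convention; the exact file is an assumption about your setup):

export CATALINA_OPTS="-XX:MaxPermSize=512m -XX:PermSize=512m -Xms256m -Xmx1024m"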

Grails app performance degrades over time

I have noticed that after my Grails app has been deployed for about 2 weeks, performance degrades significantly, and I have to redeploy. I am using the Spring Security plugin and caching users. My first inclination is that it has something to do with this and the session cache size, but I'm not sure how to go about verifying this.
Does it sound like I'm on the right track? Has anyone else experienced this and narrowed down the problem? Any help would be great.
Thanks!
Never guess where to optimize; your guess will be wrong.
Get a heap dump and profile it a little (VisualVM worked fine for me).
It might be a memory leak, as happened to me. What is your environment: OS, web server, Grails version?
I would recommend getting YourKit (VisualVM gives limited information) and using it to profile your application in production (if possible).
Alternatively, you could create a performance test (with JMeter, for example) and test the pieces of your application that you suspect are causing the performance degradation.
Monitoring memory, CPU, threads, GC and so on while running some simple JMeter performance tests will definitely find the culprit. This way you can easily re-test your system over time and see whether you have introduced new "performance-killing" bugs.
Performance testing tools/services:
JMeter
Grinder
Selenium (Can performance test with selenium grid, need hw though)
Browsermob (Commercial, which uses Selenium + Selenium-Grid)
NeoLoad by NeoTys (Commercial, trial version available)
HP Loadrunner (Commercial, The big fish on the market, trial version available)
I'd also look into installing the app-info plugin and turning on a bunch of its options (especially around Hibernate) to see if things are getting out of control there. It could be something that is filling the Hibernate session but never closing a transaction.
Another area to look at is whether you're doing anything with the Groovy template engine. That has a known memory leak, which is essentially unfixable unless you cache the template class/results; we recently fixed a problem around this in our app. If you're seeing any perm gen errors, this could be the case. A sketch of the caching pattern follows.
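To make the caching point concrete, here is a minimal sketch; the helper class and its names are hypothetical, but the pattern (parse once, reuse the Template) is the standard workaround:

import groovy.text.SimpleTemplateEngine
import groovy.text.Template

// Hypothetical helper: parse each template text once and reuse the parsed
// Template, so its generated class is loaded only once instead of per call.
class CachedTemplates {
    private static final Map<String, Template> CACHE = [:].asSynchronized()
    private static final SimpleTemplateEngine ENGINE = new SimpleTemplateEngine()

    static String render(String key, String templateText, Map model) {
        Template t = CACHE[key]
        if (t == null) {
            // createTemplate() compiles a new class each time it is called;
            // invoking it per request is what slowly fills PermGen.
            t = ENGINE.createTemplate(templateText)
            CACHE[key] = t
        }
        return t.make(model).toString()
    }
}

// usage: CachedTemplates.render('greeting', 'Hello ${name}!', [name: 'World'])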
Try installing the JavaMelody plugin. In our case it helped find a problem with GC.
