Struts2 - How to reduce the execution time

I've developed a portal using Struts2, where most of the actions are invoked through Ajax calls. However, I'm seeing unexpectedly long execution times for each action. For example, an action with no DB calls or any other external work, which only returns a search box, takes about 250~300 ms.
So far I've tried the steps below (sketched in configuration form after the list), but without much improvement in execution time. Please advise what could be done to make it faster.
Removed Dev mode in Struts
Stopped using defaultStack & tried using basicStack as the interceptor stack
Enabled templatesCache
Set templatesCache.updateDelay to 60000
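For reference, this is roughly how I've declared those settings in struts.xml (constant names as given in the 2.x documentation; exact names may differ between versions):

<constant name="struts.devMode" value="false"/>
<constant name="struts.freemarker.templatesCache" value="true"/>
<constant name="struts.freemarker.templatesCache.updateDelay" value="60000"/>
<!-- and inside my package declaration, basicStack is made the default -->
<default-interceptor-ref name="basicStack"/>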
Edit:
I'm also seeing the error below, even though functionality is unaffected. Any idea whether this could be related to the delay?
ERROR finder.ClassFinder: Unable to read class [WEB-INF.classes.com.***.***.ConfigManagement]
Could not load WEB-INF/classes/com/***/***/ConfigManagement.class - [unknown location]
at com.opensymphony.xwork2.util.finder.ClassFinder.readClassDef(ClassFinder.java:785)

Struts2 Performance Tuning
Do not use interceptors you do not need. Identify the ones your actions do not require and remove them, even from basicStack.
Use the TimerInterceptor to measure each action's execution time, then work on reducing it (see the configuration sketch below).
For Struts 2 versions before 2.3: the OGNL version 3.0.3 library is a drop-in replacement for older OGNL jars, and provides much better performance.
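A minimal struts.xml sketch of that advice, assuming a package that extends struts-default; the package, action and class names are placeholders, and the interceptors you keep should match what your actions actually need:

<package name="ajax" extends="struts-default">
    <interceptors>
        <interceptor-stack name="leanStack">
            <!-- built-in timer interceptor: logs each action's execution time -->
            <interceptor-ref name="timer"/>
            <interceptor-ref name="exception"/>
            <interceptor-ref name="params"/>
        </interceptor-stack>
    </interceptors>
    <default-interceptor-ref name="leanStack"/>

    <!-- placeholder action returning just the search box fragment -->
    <action name="searchBox" class="com.example.SearchBoxAction">
        <result>/WEB-INF/jsp/searchBox.jsp</result>
    </action>
</package>

Once the timer output shows where the time goes, you can decide which of the remaining interceptors (validation, conversion, and so on) are worth their cost for the Ajax actions.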

Related

How to reduce the test execution time in Appium

For my Android automation tests I want to reduce the execution time between test cases.
Apart from using IDs, is there any other way around this?
First, if you are using XPath for selectors, you should avoid it. XPath is one of the slowest locator strategies. Using IDs, rather than XPath and other selectors, is the most efficient approach. (You already mentioned that you are using IDs, so you don't need to worry about selectors.)
The second thing to improve is waits. If you are using implicit waits and/or Thread.sleep(), get rid of them and implement conditional explicit waits such as waiting until an element is visible. This will cut out unnecessary wait time (see the sketch after this answer). If you also use verification methods to validate elements that should disappear from the page, keep that wait time to a minimum.
Third, you can set the "noReset" capability to true in your Desired Capabilities. With it, the app state on the emulator or device is not reset between sessions unless necessary, so session initialization takes less time.
Fourth, turning off animations on the device will also reduce execution time.
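A minimal sketch combining the wait and noReset suggestions, assuming the Appium Java client 8.x and Selenium 4; the package name, resource ids and server URL are placeholders:

import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

import java.net.URL;
import java.time.Duration;

public class FastAppiumTest {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("appium:appPackage", "com.example.app");   // placeholder
        caps.setCapability("appium:appActivity", ".MainActivity");    // placeholder
        caps.setCapability("appium:noReset", true);                   // skip app reset when not needed

        // Appium 1.x default URL; Appium 2.x typically omits /wd/hub
        AndroidDriver driver = new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
        try {
            // Conditional explicit wait instead of Thread.sleep() or implicit waits.
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
            wait.until(ExpectedConditions.visibilityOfElementLocated(
                    By.id("com.example.app:id/search")))              // id locator, not XPath
                .click();
        } finally {
            driver.quit();
        }
    }
}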
If you're targeting the Android platform only, it would make sense to reconsider the tool selection and switch to Espresso, which is faster than Appium due to the nature of its implementation. Check out the How to Get Started with Espresso (Android) article for more information.
If you have to proceed with Appium:
Consider using the most efficient locator strategy (if possible, use ID instead of XPath).
Consider using the Page Object design pattern; its lazy initialization routine lets you get rid of unnecessary waits and stale element errors (see the sketch after this list).
Consider running your Appium tests in parallel.
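A small Page Object sketch under the same assumptions (Appium Java client); the class, field and resource-id names are illustrative:

import io.appium.java_client.AppiumDriver;
import io.appium.java_client.pagefactory.AndroidFindBy;
import io.appium.java_client.pagefactory.AppiumFieldDecorator;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.PageFactory;

public class LoginPage {

    // Located by resource id, the fastest strategy mentioned above.
    @AndroidFindBy(id = "com.example.app:id/username")
    private WebElement usernameField;

    @AndroidFindBy(id = "com.example.app:id/login_button")
    private WebElement loginButton;

    public LoginPage(AppiumDriver driver) {
        // Fields are proxies looked up lazily on each use, which avoids
        // many stale element errors and redundant waits.
        PageFactory.initElements(new AppiumFieldDecorator(driver), this);
    }

    public void logIn(String username) {
        usernameField.sendKeys(username);
        loginButton.click();
    }
}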

Frequently refreshing web page during long-running process

I've been hunting around this issue for a while; probably the best I've come up with is another Stack Overflow question: How should I perform a long-running task in ASP.NET 4?
I'm in a similar place in that I'm wanting to understand what my options are, but I don't feel I know enough specifically about MVC to come to a view. I'm using MVC 5 but with the 4.8 framework, plus I note that technologies such as SignalR have become available since this question was asked. I was wondering if any experienced MVC'ers could give me a view?
I too have a long running process. More specifically, the user is importing a file. The file is delimited so the import happens line by line. The file might be thousands of lines long. Each line will be parsed and imported in a fraction of a second but the whole operation might take several minutes.
I don't particularly need behaviour to be asynchronous, but because of the length of the entire process I want to regularly update the user on progress. I'm wondering what options I have?
I've got a vague recollection that I might have looked at this problem 20-odd years ago (Classic ASP), and solved it by regular flushes, sending a bit more of the page to the client every few seconds, but I'm trying also to use a _Layout page now, so I've sent the page back already. So I don't think I have that option, even assuming such a mechanism still exists. A bit more recently, but still a while ago, I might have used javascript to poll but everything I'm reading now seems to point me to newer technologies which I'm not sure I fully understand yet.
I'm just wondering how would you solve this problem?
I would not be performing any of the file parsing on the web server, especially if it's thousands of rows long. I would delegate this to a background service of sorts, whether that be a Lambda service in the cloud or a Windows service or a scheduled task. You could then call your SignalR hub from the background task (whatever that might be) to update the progress of the import.

How to pass thread local variable in Project Reactor

I've started using Project Reactor. Does anyone know how I can pass thread-local variables from one thread to another? I saw some methods on Hooks.java but could not figure out the recommended way of doing this. Can someone point me to documentation or a code snippet showing how to do it? Thanks.
I have a working example in this GitHub repository, based on spring-cloud-sleuth's implementation: https://github.com/gumartinm/JavaForFun/tree/master/SpringJava/WebReactive/spring-webreactive-reactor-context-enrich
The key classes are: ContextCoreSubscriber.java, SubscriberContext.java, ThreadContextEnrichmentAutoConfiguration.java and UsernameFilter.java
ContextCoreSubscriber.java:
Enables you to fill the Mapped Diagnostic Context: MDC
SubscriberContext.java:
Helper class for inserting data in the Reactor's Context.
ThreadContextEnrichmentAutoConfiguration.java:
In charge of configuring the Reactor's Hooks: Hooks.onEachOperator
UsernameFilter.java:
Example where we want to register the username information based on some HTTP header.
Reactor doesn't guarantee that the processing done by a Flux or Mono chain of operators will stay on a single thread. On the contrary, it performs work-stealing and lets the user switch execution contexts.
As such, ThreadLocal is not well suited to Reactor.
There is currently some work done in 3.1.0 towards providing an equivalent, at least for library authors that use Reactor, but nothing definite in place yet.
Keep your eyes peeled for 3.1.0, that should be the main theme of that release (and will probably be the focus of the second upcoming milestone, M2).
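For later readers: that work became the Reactor Context API (subscriberContext in 3.1, contextWrite/deferContextual in 3.4+), which is also what the repository in the other answer builds on. A minimal sketch, assuming Reactor 3.4+; the key name is arbitrary:

import reactor.core.publisher.Mono;
import reactor.util.context.Context;

public class ContextExample {
    public static void main(String[] args) {
        // Read the value from the Reactor Context instead of a ThreadLocal.
        Mono<String> greeting = Mono.deferContextual(ctx ->
                Mono.just("Hello " + ctx.getOrDefault("username", "anonymous")));

        // Write the value at subscription time; it travels with the subscription
        // no matter which thread ends up running the operators.
        String result = greeting
                .contextWrite(Context.of("username", "alice"))
                .block();

        System.out.println(result); // prints "Hello alice"
    }
}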

Am I missing potential problems with custom page caching in Rails 3?

I use rails to present automated hardware testing results; our tests are run mainly via TCL. Recently, we have implemented a "log4TCL" which is basically a translated version of log4J. The log files have upwards of 40000 lines, each of which is written to the database as a logline record, and load time for the view is too long to be considered usable. I have tried to use ajax requests to speed things up, but the initial query/page load accounts for ~75% of the full page load.
My solution is page caching. I cannot use the rails included page caching because each log report is a different instance of "log_viewer". The report is generated using a test_run_id parameter. Rails-included page caching only caches one instance of "log_viewer.html". What I need is "log_viewer_#{test_run_id}.html". I have implemented a way of doing this. The reports age out after one week and are purged from the test_runs/log_viewer_cache directory to save disk space. If an older report is needed, loading the page re-generates the report with a fresh age-out timer.
I have come to the conclusion that this is the way to go. My concern is that I have not found any other implementations such as this anywhere which leads me to believe that I have missed an inherent flaw in my design. Any input would be much appreciated.
EDIT: For clarification, the "dynamic" content of this report is what takes too long to load. I need to cache multiple instances of it, per test_run_id, which is exactly what action/fragment caching is not designed for.

How do I get a more detailed transaction trace with the Ruby New Relic agent

I'm running a rails 3.0 application on Heroku and using the New Relic addon/service.
I have been looking at the transaction traces feature (available in the pro version) to understand a little more about the performance characteristics of the application. However, a significant portion of time (30-50%) is "uninstrumented time". After making a few stabs by putting method_tracers in some places and going through the reasonably slow cycle to test whether I get more info, I'm feeling this is going nowhere fast.
It seems in the PHP new relic agent they have a great feature to get very detailed traces without needing to guess where to put method tracers: http://newrelic.com/docs/php/php-agent-faq#top100
Is there anything similar to this for ruby?
Note: I'm already using rpm_contrib to get some more info and have garbage collection stats enabled. Also, this is not about fixing a performance problem, just understanding how to better use the performance tools available and scratch a niggling itch about that uninstrumented time.
There isn't currently anything similar for Ruby. I'll mention it to the Ruby engineer when I get a chance. My guess is unless a lot of requests come in for it, it won't be at the top of the list for a while, though. In the meantime, you can use the method tracers to figure out the uninstrumented time.
Hope that helps.
Method tracers can work well, but if you have a lot of code in your controller, try a binary search using trace_execution_scoped, which records the time spent in a block of code:
http://newrelic.github.com/rpm/NewRelic/Agent/MethodTracer/InstanceMethods/TraceExecutionScoped.html#method-i-trace_execution_scoped
Add a couple calls to this, give each metric a sensible name like "Custom/MySlowControllerAction/block0" (first argument to trace_execution_scoped), and repeat.
The metrics you name will show up not just in Transaction Traces, but also in the Performance Breakdown for the controller action under the Web Transactions tab, so you'll see average time in that block of code across all requests, not just the slow ones.
