I am using MVC5. The session expires every 15 minutes, or sometimes sooner. Below is what I have added in my web.config file:
<authentication mode="None" />
<sessionState mode="InProc" timeout="20" />
Please help me.
Thank you
InProc session storage is in-memory and tied to the process. In other words, it's volatile: both because the memory can be reclaimed and the process can be killed. In particular the App Pool is set to recycle periodically by default and can also crash, or IIS or the server itself could be restarted. All of these will destroy any active sessions.
InProc is really only viable in development, to save you from having to set up an actual session store, just to play with some code. In production, you should always be using something else, like SQL Server or Redis. Even in development, it's important to realize that, again, since it's tied to the process, doing something like stopping and restarting debugging will kill the IIS Express process and thus your session state.
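As a rough sketch only (the connection string and port below are illustrative defaults, not taken from your config), moving the session out of process is a one-line change in web.config:

<!-- Option 1: ASP.NET State Service (start the "ASP.NET State Service" Windows service first) -->
<sessionState mode="StateServer" stateConnectionString="tcpip=localhost:42424" timeout="20" />

<!-- Option 2: SQL Server (prepare the database with aspnet_regsql.exe -ssadd first) -->
<sessionState mode="SQLServer" sqlConnectionString="Data Source=.;Integrated Security=SSPI" timeout="20" />

Either way, the session data survives an app pool recycle, which InProc never will.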
You can increase the session timeout by adding the following to web.config (the timeout is in minutes):
<sessionState timeout="120" />
If you're hosting your website in IIS and you've not made any request to it for 15 minutes, then it's likely the app pool has been recycled. That means any data that was stored in memory (like the session state, since you specified mode="InProc") has been lost, and a new worker process is started when the website is next accessed. You can easily tell if this is the case, because app pool start-up can take 10 to 30 seconds, which explains why the first request takes so long to render while subsequent requests are much faster.
If this is happening on your local machine, any re-compilation of your code has the same effect.
Another possibility is that you are behind a load balancer and that the second request doesn't go to the same physical server which served the first request. (obviously, the second server has no knowledge of what's in memory on the first server)
For those reasons, it's best to avoid mode="InProc": you wouldn't want it in production, and it's better to stick with the same settings in development so you catch any issues early.
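If app pool recycling really is what's biting you, the relevant knobs live at the IIS level rather than in web.config. A sketch of what the settings look like in applicationHost.config (the pool name is a placeholder; the same values can be set in IIS Manager under the pool's Advanced Settings):

<applicationPools>
  <add name="MyAppPool">
    <processModel idleTimeout="00:00:00" />   <!-- 0 = never shut the worker process down for being idle -->
    <recycling>
      <periodicRestart time="00:00:00" />     <!-- 0 = disable the scheduled (default 29-hour) recycle -->
    </recycling>
  </add>
</applicationPools>

Bear in mind that recycling also happens on config changes, memory limits and crashes, so an out-of-process session store is still the more robust fix.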
I have a similar setup, which sets the timeout to 20 minutes:
<sessionState mode="InProc" cookieless="false" timeout="20" />
I've got a site which has an export feature. This feature can export parts of the database, up to a full database export. It has been optimized a lot but still needs some 90-180 seconds to finish. When debugging, timeouts aren't an issue, but live I receive a 504 Gateway Timeout error after about 90 seconds. I am guessing that IIS gets tired of waiting for the backend to respond and returns a 504. Is there any way to specify a longer timeout, e.g. 5 minutes?
I've got an old executionTimeout setting set to 3600 which doesn't seem to do much any more (I believe it's an IIS <7 setting).
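For reference, the setting I mean sits in web.config roughly like this (as far as I know it only takes effect when compilation debug="false"):

<system.web>
  <httpRuntime executionTimeout="3600" />
</system.web>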
I've also tried this suggestion from another Stack Overflow question:
<configuration>
<system.applicationHost>
<webLimits connectionTimeout="00:01:00"
dynamicIdleThreshold="150"
headerWaitTimeout="00:00:30"
minBytesPerSecond="500"/>
</system.applicationHost>
</configuration>
The above doesn't work; adding it breaks the config file. Is it supposed to work?
Main question: how/can I increase waiting time in IIS to avoid 504s?
This wasn't an IIS issue. I believe the old way of providing a timeout worked. We have a reverse proxy with a short timeout; changing it to a longer timeout effectively resolved the issue.
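For anyone hitting the same thing: the exact directive depends on the proxy in front of IIS (ours isn't named above). As an illustration only, with an nginx reverse proxy the wait for the backend can be raised like this:

location / {
    proxy_pass         http://backend;   # placeholder upstream name
    proxy_read_timeout 300s;             # wait up to 5 minutes for the upstream response
}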
I am experiencing heavy performance problems when generating PDFs with Jasper Reports in my Grails application. I am invoking the jasperService:
def reportDef = jasperService.buildReportDefinition(parameter, LocaleContextHolder.getLocale(), [data: emptyData])
Running it several times in JBoss, performance is good at first. After X hours, performance is 100+ times worse than just after JBoss starts: response time goes from 7-12 seconds to several minutes for creating a PDF with a single page. I am sure the performance lag is within this invocation, because I have added time measurements around it. As the report data is passed in via the parameters, I can also rule out database connection issues.
I have analyzed the HEAP, but it is used ~50% and not changing much during PDF creation. Overall memory is also not fully used.
I have analyzed the PermGen, but it is also far from being full.
The CPU is permanently at 100% during creation, which is fine, knowing that PDF creation is very CPU-intensive. I have ensured that no other process is holding the PDF creation up: first, by restarting the process several times and measuring no difference, so I can exclude external interruption; and second, by the fact that performance is much better after JBoss is restarted.
Given these facts, I have started to analyze JBoss itself by taking thread dumps while the PDF creation thread is running. I can see that nothing else is running (except the thread-dumping thread), neither when it is slow nor when it is fast after a restart. I can just see that in several thread dumps Groovy is making several AST transformations, which is not unusual for Groovy.
Now I am at a loss. Heap/PermGen is fine, CPU is fine. What on earth are Jasper Reports / Grails doing?
Maybe someone has had similar experiences or an idea about the root cause? Is there something that needs to be cleaned up in Jasper Reports?
EDIT: My further analysis led to the unproven but likely conclusion that JBoss 7.1.1 (the latest stable) is the root cause. After installing the app on Tomcat, everything runs smoothly, even after several days. I'll keep this open; maybe someone has had the same experience and would like to post it? Otherwise, I will close it with this solution. I may also test my app on earlier versions of JBoss, or on 7.2/7.3.
The solution was that we had not noticed that JBoss was partially ignoring our Log4j configuration and was logging massively into server.log, which we were not monitoring. The Jasper and Grails plugins were writing dozens of MB to the log file for each PDF generation. After removing these log statements, performance was good again.
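As a sketch of the kind of change involved (the logger names here are illustrative, not our exact configuration), raising the threshold for the noisy loggers in a Grails 2.x Config.groovy looks something like this:

// Config.groovy - stop Jasper/Grails artefacts flooding server.log on every PDF run
log4j = {
    error 'net.sf.jasperreports'   // Jasper Reports internals
    error 'grails.app'             // application artefacts (controllers, services, ...)
    root {
        warn()
    }
}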
We have a MVC 3 application which has been deployed onto a newly built Windows 2008 R2 Web Edition server which is performing badly.
This application has been through development, quality assurance and user acceptance testing cycles on the same operating system (different boxes) with no performance issues.
The only difference we can see with the server is that it sits in the DMZ and as such has two network adapters configured, one for the internet, and one to punch through the firewall.
We have put all sorts of logging into the application and confirmed that everything up to the 'return ActionResult' is working correctly (i.e. ~500ms). It then takes 15 seconds to render the page.
We have tried setting debug=false in the config file; I'm not sure what else to look for here, it seems like an environment issue.
Any suggestions, please? I am about to investigate whether the thread pool size could be causing problems.
Also, if it helps, the page uses multiple partial views; I have read of others having problems with them.
Thanks,
Matt
Since the application performs OK in other environments, I would suggest you investigate the following:
Database - are you running against a different database? How long do the queries take to execute? If you have a non-optimized database with a million records in production, and only a few records in test, you won't spot performance problems soon enough.
Network - what is the latency between the web box and the database? If you lose 100ms on each database query just because of the network, then a page that triggers 50 queries has lost 5 seconds. I've seen poorly configured routers / load balancers do just that.
Try profiling each component of your system (db, network, web box) in order to find out where you're wasting all that time. Try http://code.google.com/p/mvc-mini-profiler/.
PS. You MUST have debug=false in your prod env.
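A minimal sketch of wiring up that profiler in an MVC 3 app (this uses the old MvcMiniProfiler namespace; the API may differ in newer releases):

// Global.asax.cs - profile every request
using MvcMiniProfiler;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_BeginRequest()
    {
        MiniProfiler.Start();   // consider guarding with Request.IsLocal outside of testing
    }

    protected void Application_EndRequest()
    {
        MiniProfiler.Stop();
    }
}

Then render the results with @MvcMiniProfiler.MiniProfiler.RenderIncludes() in _Layout.cshtml, and remember the PS above: <compilation debug="false" /> in web.config for production.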
I have a simple Rails app deployed on a 500 MB Slicehost VPS. I'm the only one who uses the app. When I run it on my laptop, it's fast enough, but the deployed version is insanely slow. It takes 6 to 10 seconds to load the login screen.
I would like to find out why it's so slow. Is it my code? (Don't think so because it's much faster locally, but maybe.) Is it Slicehost's server being overloaded? Is it the Internet?
Can someone suggest a technique or set of steps I can take to help narrow down the cause of this problem?
Update:
Sorry, I forgot to mention: I'm running it under CentOS 5 using Phusion Passenger (a.k.a. mod_rails or mod_rack).
If it is just slow the first time you load it, that is probably because Passenger killed the process due to inactivity. I don't remember all the details, but I do recall reading about people who used cron jobs to keep at least one process alive, to avoid the lag that can occur when Passenger needs to reload the environment.
Edit: more details here
Specifically, the pool idle time defaults to 2 minutes, which means that after two minutes of idling Passenger has to reload the environment to serve the next request.
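If it is Apache + Passenger, the relevant directives look something like this (a sketch; the defaults and available options vary by Passenger version):

# Apache config / vhost
PassengerPoolIdleTime 0      # never shut application processes down for being idle
PassengerMinInstances 1      # keep at least one process warm (newer Passenger versions)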
First, find out if there's a particularly slow response from the server. Use Firefox and the Firebug plugin to see how long each component (including JavaScript and graphics) takes to download. Assuming the main page itself is what is taking all the time, you can start profiling the application. You'll need to find a good profiler, and as I don't actually work in Ruby on Rails, I can't suggest any: google "profile ruby on rails" for some options.
As YenTheFirst points out, the server software and config you're using may contribute to a slowdown, but A) slicehost doesn't choose that, you do, as Slicehost just provides very raw server "slices" that you can treat as dedicated machines. B) you're unlikely to see a script that runs instantly suddenly take 6 seconds just because it's running as CGI. Something else must be going on. Check how much RAM you're using: have you gone into swap? Is the login slow only the first time it's hit indicating some startup issue, or is it always that slow? Is static content served slow? That'd tend to mean some network issue (either on the Slicehost side, or your local network) is slowing things down, assuming you're not in swap.
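For the swap question, a couple of commands on the slice will tell you quickly (assuming a standard Linux install):

free -m      # 'used' swap well above zero means you're paging
vmstat 5 5   # non-zero si/so columns mean you're actively swapping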
When you say "fast enough" you're being vague: does the laptop version take 1 second to the Slicehost 6? That wouldn't be entirely surprising, if the laptop is decent: after all, the reason slices are cheap is because they're a fraction of a full server. You're using probably 1/32 of an 8 core machine at Slicehost, as opposed to both cores of a modern laptop. The Slicehost cores are quick, but your laptop could be a screamer compared to 1/4 of core. :)
Try to pinpoint where the slowness lies:
1/ Is the application slow, or the infrastructure (network + web server)? Put a static file on your web server and access it through your browser.
2/ If that is fast, it is probably a problem with the application + server configuration:
- database access is slow
- try a page with a simple loop: is it slow?
3/ If it is slow, it is probably your infrastructure. You can check:
- bad network connection: do a packet capture (with Wireshark, for example; see the sketch after this list) and look for retransmissions, duplicate packets, etc.
- is DNS resolution slow?
- is the server misconfigured?
- etc.
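For the packet capture mentioned above, a sketch of how it might be done (the interface name and filter are placeholders; open the resulting file in Wireshark):

tcpdump -i eth0 -w slow-requests.pcap port 80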
What is Slicehost using to serve it?
Fast options are things like Mongrel, or Apache's mod_rails (also called Phusion Passenger).
These are dedicated servers (or plugins to servers) which run an instance of your rails app.
If your host isn't using that, then it's probably defaulting to CGI. Rails comes with a simple CGI script that will serve the page, but it reloads the app for every page.
(edit: I suspect that this is the most likely case, that your app is running off of the CGI in /webapp_directory/public/dispatch.cgi, which would explain the slowness. This tends to be a default deployment on many hosts, since it doesn't require extra configuration on their part, but it doesn't give good performance)
If your host supports "Fast CGI", rails supports that too. Fast CGI will open a CGI session, and keep it open for multiple pages, so you get much better performance, but it's not nearly as good as Mongrel or mod_rails.
Secondly, is it in 'production' or 'development' mode? The easy way to tell is to go to a page in your app that gives an error. If it shows you a stack trace, it's in development mode, which is slower than production mode. Mongrel and mod_rails have startup options to determine whether to run the app in production or development mode.
Finally, if your database is slow for whatever reason, that will be a big bottleneck as well. If you do have a good deployment (Mongrel/mod_rails/etc.) in production mode, try looking into that.
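To illustrate the last two points (the port is a placeholder):

# Phusion Passenger (Apache vhost): force production mode
RailsEnv production

# Mongrel: start one instance in production mode
mongrel_rails start -e production -p 3000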
Do you have a lot of data in your DB? I would double-check that you have indexed all the appropriate columns, because this can make a huge difference. On your local dev system you probably have a lot more memory than on your 500 MB slice, which would make the DB run a lot slower if you have big, unindexed tables. You can also use MySQL's slow query log to pinpoint queries hitting columns without indexes.
Other than that, yes: Passenger will need to spin up a process for you if you have not been using the site recently. If this is the case, you should see a significant speed increase on the second, and especially third and later, page loads.
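A quick sketch of both suggestions (the table and column names are made up; the option names below are the MySQL 5.0-era ones):

# my.cnf - log statements that take longer than 2 seconds
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time  = 2

-- then index the columns those slow queries filter on, e.g.:
ALTER TABLE users ADD INDEX index_users_on_login (login);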
You might want to run a local virtual machine with 500 MB to compare. Are you doing a lot of client-server interaction? Delays over the WAN are significant.
You might want to check out RPM (there's a free "lite" version too) and/or New Relic's Tune Up.
Your CPU time is guaranteed by Slicehost using the Xen virtualization system, so it's not that. Don't have the other answers for you, sorry! Might try 'top' on a console while you're trying to access the page.
If you are using FireFox and doing localhost testing (or maybe even on LAN) you may want to try editing the network.dns.disableIPv6 setting.
Type about:config in the address bar and filter for network.dns.disableIPv6 and double-click to set to true.
This bug has been reported mainly on Vista, but on some other OSes as well.
You could try running 'top' when you SSH in, to see which process is heavy. If you also have problems logging in, you might try looking at the statistics in the Slicehost manager.
If you discover MySQL is at fault, consider decreasing the number of server processes it can spawn.
512 MB seems decent for a Rails application; you might also want to check whether something is misconfigured.
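If by "servers" that means MySQL's worker/connection count, the trim would look something like this in my.cnf (the values are purely illustrative):

# my.cnf - shrink MySQL's footprint on a small slice
max_connections = 20
key_buffer_size = 16M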