Orchard CMS 1.5 Very Slow to Load Pages - asp.net-mvc

I am developing my first site using Orchard 1.5 and I am concerned about its speed. I realize that in development mode pages are compiled dynamically, which causes slowness, but even after doing the following to put the site into production mode, each request still takes anywhere from 2 to 6 seconds to display. Here is what I have done:
Built the solution using a "Release" build
Logged out from the site (viewing as anonymous)
Set the application as the root site in IIS
Disabled the "Shape Tracing" module
Set the <compilation debug="false" ...> in the web.config
Set the theme to the base TheThemeMachine theme
I only have 5 pages of very basic content and the home page contains only the default content from the setup of Orchard. All pages are slow to load. Here is my site map:
Home (2 sec load)
About Us (2 sec load)
Bios (a projection page - 6 sec load)
John Doe (2 sec load)
Mary Jane (2 sec load)
With these settings in place, the page load times are still unacceptably slow. I am only testing this on my local machine and haven't rolled it out to the production server yet, but my machine is a robust quad-core running Windows 7 with 8GB of RAM, so I don't see how it's much different from our production servers. Since all the requests are local, network bandwidth is a non-issue. The only thing that differs from full production is that the application is accessing the SQL server over the network, but I can't imagine there is that much SQL traffic.
FYI - I am checking the load time in Firebug and only using the value from the initial GET to the server, not any ancillary requests.
Is what I am seeing normal for an Orchard site, or what other changes should I make to optimize performance? When I go to www.orchardproject.net, it is very snappy (<300ms response) even with all their content, so why is my simple configuration so slow?

Why not run MiniProfiler to measure where the slowdowns are before trying to optimize? There is an Orchard module that makes it easy to plug in.
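For reference, here is a minimal sketch of wiring MiniProfiler up by hand in Global.asax (the Orchard module does this for you); it assumes the MVC-era StackExchange.Profiling package and its static Start/Stop API:

using System.Web;
using StackExchange.Profiling;

public class MvcApplication : HttpApplication
{
    protected void Application_BeginRequest()
    {
        // Start timing this request; results are rendered by
        // MiniProfiler.RenderIncludes() placed in the layout view.
        MiniProfiler.Start();
    }

    protected void Application_EndRequest()
    {
        MiniProfiler.Stop(); // finish timing and store the profile
    }
}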

I've just added the Caching module to our new site and added all pages to the performance settings page that is standard with 1.5; this helps a lot.

My site was getting progressively slower... and I finally noticed that I had almost 50k comments! Most were marked as spam, but they were still filling up the database. I'm trying to clean them out now and will find out whether this helps (I'll update when I do).

Related

ASP.NET MVC website performance issue on Azure AppServices

We have an ASP.NET MVC 5 website hosted on Azure App Services.
We have two distinct instances of this site on Azure: one for tests and one for production.
These two instances are in distinct Azure plans, but all of the services in each instance are in the same region (Western Europe).
The first one seems to work acceptably, but we are facing performance issues loading some pages on the second one (sometimes 15s to 30+s page load times).
Each of our application instances is composed of:
ASP.NET MVC 5 (with FormsAuthentication)
N-Tiers Architecture
EntityFramework 6.1.3
ApplicationInsights service
2 SQL Server databases (1 for business data & 1 for security data) located in an Azure SQL service
The Azure plan used is "Basic (Small)" for App Services and "S0 Standard (10 DTUs)" for SQL services.
The App Service plan runs at around 5% CPU and 58% memory; the databases run at around 3% DTU.
With AppInsights, I've seen that all is OK in the controllers, and the problem seems to come from lower down the stack.
I've also noticed that some of the affected page loads show a failed SQL dependency call (result code 207).
Taken individually, the SQL request response times are fine (under 300ms).
We have, of course, already read a lot of posts about Azure performance issues, but nothing has helped us.
We would really appreciate some help please.
Many thanks!
Enable the profiler in Application Insights (same thing that used to live under https://azureserviceprofiler.com). It's now under the Performance blade.
Stress test your application for a few hours, enough for a good amount of ETL traces to be collected so it can paint a comprehensive picture of where time is being spent. A tiny "trace" icon will then become available next to your controllers, and clicking it shows the breakdown of where the time goes.

MVC3 site running fast locally, very slow on live host

So I've been running into some speed issues with my site which has been online for a few weeks now. It's an MVC3 site using MySQL on discountasp.net.
I cleaned up the structure of the site and got it working pretty fast on my local machine, around 800-1100ms to load with no caching. The strange thing is when I try and visit the live site I get times of around 15-16 seconds, sometimes freezing up as long as 30 seconds. I switched off the viewstate in web.config and now the local loads in 1.3 seconds (yes, oddly a little longer) and the live site is down to 8-9 seconds most of the time, but that's still pretty poor.
Without making this problem too specific to my case (since there can be a million reasons sites go slow), I am curious why the local Visual Studio dev server or IIS Express runs so fast while the live site runs so slow. Wouldn't anything code-wise or dependency-wise affect both equally? I just can't think of a reason that would affect the live site but not the local one.
Any thoughts?
Further thoughts: I have the site set up as a sub-folder which I'm using IIS URL Rewriting to map to a subdomain. I've not heard of this causing issues before, but could it be a problem?
Further Further Updates: So I uploaded a simple page that does nothing but query all the records in the largest table I have, with no caching. On my local machine it averages around 110ms (which still seems slow...), and on the live site it's usually over double that. If I'm hitting the database several times to load a page, it makes sense that this would heavily affect the page load time. I'm still not sure if the issue is with LINQ or MySQL or MVC in general (maybe even discountasp.net).
I had a similar problem once and the culprit was the initialization of the user session. It turned out a lot of objects were being read from and written to the session state on each request, but for some reason this wasn't affecting my local machine (I probably had InProc mode enabled locally).
So try adding an attribute to some of your controllers and see if that speeds things up:
using System.Web.Mvc;
using System.Web.SessionState;

[SessionState(SessionStateBehavior.Disabled)]
public class MyController : Controller
{
}
On another note, I ran some tests, and surprisingly, it was faster to read some of those objects from the DB on each request than to read them once, then put them in the session state. That kinda makes sense, since session state mode in production was SqlServer, and serialization/deserialization was apparently slower than just assigning values to properties from a DataReader. Plus, changing that had the nice side-effect of avoiding deserialization errors when deploying a new version of the assembly...
By the way, even 992ms is too much, IMHO. Can you use output caching to shave that off a bit?
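If you go that route, a minimal output-caching sketch looks like this (the controller name and data-access helper are made up for illustration):

using System.Web.Mvc;

public class ProductsController : Controller
{
    // Cache the rendered output for 60 seconds, keeping one cache
    // entry per distinct id so different products don't collide.
    [OutputCache(Duration = 60, VaryByParam = "id")]
    public ActionResult Details(int id)
    {
        var model = LoadProductFromDatabase(id); // hypothetical data-access helper
        return View(model);
    }

    private object LoadProductFromDatabase(int id)
    {
        return new { Id = id }; // stand-in for the real query
    }
}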
So as I mentioned above, I had caching turned off for development, but only on my local machine. What I didn't realise was there was a problem WITH the caching which was turned on for the LIVE server, which I never turned off because I thought it was helping fix the slow speeds! It all makes sense now :)
After fixing my cache issue (an IQueryable<> at the top of a dataset that was supposed to cache the entire table.. >_>), my speeds have increased 10-fold.
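For anyone who hits the same trap, here is a sketch of the bug and the fix (the method names are made up): an IQueryable<T> is a query definition, not data, so caching it caches something that re-runs the SQL every time it is enumerated.

using System.Collections.Generic;
using System.Linq;
using System.Web;

public static class OrderCache
{
    public static void BuggyCache(IQueryable<string> activeOrders)
    {
        // Bug: this caches the query itself. Every consumer that
        // enumerates it triggers a fresh round-trip to the database.
        HttpRuntime.Cache["orders"] = activeOrders;
    }

    public static void FixedCache(IQueryable<string> activeOrders)
    {
        // Fix: materialize the rows once with ToList(), then cache
        // the actual data instead of the deferred query.
        List<string> rows = activeOrders.ToList();
        HttpRuntime.Cache["orders"] = rows;
    }
}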
Thanks to everyone who assisted!

How are people solving app pool recycle issues on deployment with large apps?

Currently, after a build/deployment of our app (58 projects, large ASP.NET MVC 3 front end), it takes ~15-20 secs to load as it goes through the whole 'recycling the app pool' process (release configuration).
We do have a web farm if that alters people's answers, but the question really is:
What are people doing in large scale applications where a maintenance window isn't viable (we're a 24/7 very active website) to minimize that initial 'first hit' on the app pool recycle after a deploy?
We've used a number of tools to analyze that startup time and there doesn't really seem to be any way to bring it down, so what I'm looking for are the techniques people employ to minimize the impact of a large application deploy on users.
By default, if you change 15 files in an ASP.NET application at once (even via FTP), the app pool is automatically recycled. You can change the file threshold, but as soon as web.config or bin files change it needs to recycle. So in my opinion the ideal solution for an environment like yours would be as follows:
4 web servers (this is an arbitrary number)
each server has a status.aspx that the load balancer looks at (see the sketch after this list) - use TeamCity to take 2 of these servers "offline" (out of the load balancer) and wait 20 seconds for the traffic to drain. A distributed cache will help keep user-experience problems to a minimum
Use TeamCity to deploy to those 2 servers - run your automated tests etc. and once you are happy put those back into the farm and take the other 2 offline and deploy to those
This can all be scripted/automated. The only issue is that schema changes which are not backwards compatible may not allow running the new version of the site in parallel with the old version for the 20 seconds it takes the load balancer to kick back in
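A minimal sketch of that status endpoint (here as an MVC controller rather than a literal status.aspx; the flag-file path is made up): the deployment script deletes the file to drain a server out of the pool and recreates it to bring the server back.

using System.Web.Mvc;

public class StatusController : Controller
{
    // Hypothetical flag file the deployment scripts create or delete.
    private const string FlagFile = @"C:\deploy\in_rotation.flag";

    public ActionResult Index()
    {
        if (System.IO.File.Exists(FlagFile))
            return Content("OK"); // 200: load balancer keeps this server in the pool

        // 503: load balancer sees the failure and drains traffic away.
        return new HttpStatusCodeResult(503);
    }
}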
This is good old-fashioned canary releasing - there are some patterns here http://continuousdelivery.com/patterns/ to take into consideration. I'd also suggest a copy of the Continuous Delivery book - it's like a continuous delivery bible and has got me out of a few situations :)
At the very least you could run a tinyget script against the application after the deployment completes, which will "warm up" the application; however, if a customer hits your site before the script runs, they will still face a delay. What post-deployment steps do you currently have in place?
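If tinyget isn't available, a tiny console warm-up along these lines does the same job (the URL list is obviously specific to your site):

using System;
using System.Net;

class WarmUp
{
    static void Main()
    {
        // Hypothetical list of routes worth hitting so IIS compiles
        // and JITs them before real visitors arrive.
        string[] urls = { "http://localhost/", "http://localhost/account/logon" };

        using (var client = new WebClient())
        {
            foreach (string url in urls)
            {
                try { client.DownloadString(url); Console.WriteLine("warmed " + url); }
                catch (WebException ex) { Console.WriteLine("failed " + url + ": " + ex.Message); }
            }
        }
    }
}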
In a farm environment you could stage deployments too: take one server out of the load balancer, update it, bring it back online after the deployment, then take the other out, complete the deployment, and reintroduce it into the farm. How is your SQL Server set up - clustered?
copy and paste from my post here
We operate a Blue/Green deployment strategy on a 4 tier architecture which has a web site over 4 servers at the top tier. Due to the complexity the architecture introduced for deployments, we needed a way to deploy without disturbing any traffic to the "live" site. Following Fowler's advice, but not quite in the same way, we came up with a solution that means we have 2 sites on each server (a blue and a green, or in our case site A and site B). The live site has the appropriate host header, and once we have deployed and tested to the non-live site, we then flip the headers of the 2 sites so that what was once live is now the non-live site, and vice-versa. The effect is, a robust deployment that can be done in business hours and with the highest level of confidence.
This of course complicates your configuration and deployment slightly, but it's worth the effort. I guess it kind of goes without saying that you want to script both the deployment, and the host header swapping.
Firstly, unless you're running Google or something bigger, does a 15-20s load time at 3am for a handful of users really matter that much? I'd say the effort invested in eliminating the occasional lag would far outweigh the 15-20s inconvenience to a couple of users.
I consider it a necessary evil of using ASP.NET unfortunately. Using a pre-compiled site (.DLLs instead of the code-behind files) will lessen the time but not necessarily eliminate it.
The best thing you can do is use something like a status notification bar to warn users they may experience some "issues" during "essential maintenance".
But even then, I'd say in terms of user experience it'd be better to keep quiet and have a handful of people blame their "slow internet" when your site takes 20s to load on one occasion, than announce to all and sundry that it will be slow.
You can also try this approach : http://weblogs.asp.net/scottgu/archive/2009/09/15/auto-start-asp-net-applications-vs-2010-and-net-4-0-series.aspx
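The linked post describes the ASP.NET 4.0 auto-start feature; the application side of it is a preload hook along these lines (registration in IIS's applicationHost.config is also required, per the post):

using System.Web.Hosting;

// IIS calls Preload() when the app pool spins up, before any real
// request arrives, so expensive initialization happens off the clock.
public class AppWarmUp : IProcessHostPreloadClient
{
    public void Preload(string[] parameters)
    {
        // Hypothetical examples of warm-up work:
        // prime caches, touch the EF model, hit key code paths, etc.
    }
}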
Without knowing anything about your site, my first thought is that you might be able to break it down into smaller sites so that they start faster individually.
Second, with your web farm, I assume you have some sort of load-balancing device in front of it from which you can pull machines out of the pool when they are being deployed. Don't put them back in the pool until after you have sent a request against the site to get it started up. You should be able to script this so that you are pretty much clicking a button that takes a machine out, deploys to it, and sends a request once it's back up and happy.
You could consider using aspnet_compiler.exe to precompile your application, because I think the delay after deployment is caused by the compilation phase rather than the app pool recycle itself.
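A typical invocation looks like this (the paths are placeholders), compiling the site rooted at the source folder into a deployable target folder:

aspnet_compiler -v / -p C:\source\MySite C:\compiled\MySite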

For Ruby and Rails, how to print out the true page rendering time on the webpage?

If, in the controller, I use:
def index
  @time_start_in_controller = Time.now
  ...
end
and at the end of the view, use
<%= "took #{Time.now - @time_start_in_controller} seconds" %>
but isn't the time taken in the view short of the true end of rendering, since the view still needs to be merged with the layout and so forth? What is a more accurate way (as accurate as possible) to print the page generation time right on the webpage?
(update: also, the console log shows the request taking 61ms, but the page definitely takes 2 to 3 seconds to load, and the network I am using is super fast, at home or at work: 18Mbps or higher, with a ping of maybe 30ms)
update: it is a bit strange that if I use the HTTP performance-testing tool ab:
ab -n 10 http://www.my-web-site.com:8080
it takes 3 seconds total for all 10 requests. But if I use Firefox or Chrome to load the page, each page load is about 3 seconds. This is tunneled through my work computer back to my notebook running Rails 3, but that shouldn't make a difference, because I run Bash locally for the above command and use Firefox locally too.
In a typical production environment, static content (images, CSS, JS) is handled by the web server (e.g. Apache, Nginx, etc.), not your Rails server, so you should check those logs as well. If you are serving static content from your Rails server, that could be your problem right there.
If your browser time is slow but the time taken in Rails (according to the logs) is fast, that can mean many things, including but not limited to:
network speed is slow
your DNS server is slow and the browser can't resolve your DNS quickly (this happens, for instance, if you use GoDaddy as your DNS server; they throttle DNS lookups)
request concurrency exceeds the number of threads you have in Rails
One way to debug these types of performance issues is to put something in front of the Rails server (for example, HAProxy) and turn its logging to full. The logs will show how many requests are waiting, how long the actual request/response transfer took, and how long your Rails thread took to process each request.

How to increase the performance on my ASP.NET MVC 2 website?

I run a social community site for card players. I currently have 7,000+ members and getting 2,000 visitors/15k+ pageviews a day. Recently the site has started to really slow down during peak hours of the day and I am starting to think my site needs some serious performance optimizations in the code and settings. I really don't want to purchase a second server to run the site as I am pretty sure my current server should be able to handle this kind of load easily.
During peak hours, when the pages load, they still load very quickly. The problem is that a lot of the time the request will time out and give a "website not available" error in the browser. Then you refresh and it loads quickly. Then a couple of page views later it does it again. My CPU and RAM usage don't even get very high during these times, so I have to believe it is in my IIS settings or something. I have done some searching and cannot find any good answers or ideas about what the fix could be.
Here are some stats of my setup:
ASP.NET MVC 2 w/ Output Caching and Partial View caching
IIS 7
Windows Web Server 2008 R2 64-Bit
AMD Athlon II X2
4GB of RAM
My heavier pages on the site have quite a bit of database reads and a lot of image requests. I am not sure if this is the problem, because when a page does load it is VERY fast.
I did purchase a new server that I am building and was thinking about switching everything over to it instead. The new server is going to run an Intel Xeon X3430 2.4GHz quad-core w/ HT and 8GB RAM.
I am looking for a few possible things I could look into for this problem and if there are any possible solutions or settings I could implement to stop the "website not available" messages and also help my server handle future traffic increases as the site grows. Would upgrading the server to this new one make the difference?
It looks like this is more of an IIS issue than your code or hardware. There is a default setting for max concurrent connections per CPU, and a request queue length limit, that you may be hitting.
See Optimising IIS Performance and someone with a similar problem (and resolution).
