HTTP Request Timeouts on Azure Web Apps - asp.net-mvc

We have a collection of MVC 5 websites running on the Azure cloud hosting platform. We have several environments in which these websites run (Development, Staging, Production), and we are experiencing a very difficult-to-troubleshoot issue. Intermittently, when a request is made to the production environment, the request is rejected, or the file is served so slowly that the server times out and aborts the request. This only seems to occur in the production environment; it does not appear to happen in Development or Staging.
Given that our websites only recently started receiving traffic, the production environment is actually the least used of the three, so it is not a matter of the machine being out of resources. We also have the capability to monitor the machine's resources through a web UI, and we see no issues there.
When configuring these systems, we do not have a ton of control over how they are set up. To that end, it's unlikely that there is a configuration difference between them: they are presumably set up from an image and configured through a web UI, and as far as we can tell the settings are the same across all of them. To ensure it's not a machine-configuration issue, we have mirrored the production environment by recreating it, and we are still experiencing the same problem.
The websites in our environments are secured via SSL certificates. To remove that as a potential culprit, we turned SSL off on our production site and tested it. This didn't fix the issue; we still got intermittent failed requests.
We thought it might have to do with routing and MVC handling the files, so we attempted downloads of static files (images, JavaScript files) as well as dynamic files (views, bundled JavaScript), and we still get these failed requests. In our routing configuration, we do not override the default RouteExistingFiles value, so MVC should not be handling the routing of static files (as I understand it, at least; please correct me if I'm wrong).
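For reference, a minimal sketch of what a typical MVC 5 RouteConfig looks like, showing where RouteExistingFiles would be set (the class, route names, and ignored paths below are the project-template defaults, not necessarily ours):

    using System.Web.Mvc;
    using System.Web.Routing;

    public class RouteConfig
    {
        public static void RegisterRoutes(RouteCollection routes)
        {
            // RouteExistingFiles defaults to false: routing yields to files
            // that exist on disk, so static files bypass MVC entirely.
            // routes.RouteExistingFiles = true;   // only if explicitly opted in

            // Belt and braces: explicitly keep asset paths out of routing.
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
            routes.IgnoreRoute("Content/{*pathInfo}");
            routes.IgnoreRoute("Scripts/{*pathInfo}");

            routes.MapRoute(
                name: "Default",
                url: "{controller}/{action}/{id}",
                defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional });
        }
    }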
Our tests are run against the primary domain name on the account, and the issue does not appear to be attributable to anything DNS-related.
It doesn't seem to be related to our database connections either, as we do not hit the database when serving static files, nor when loading our login page (which is what we primarily tested against).
We are running short on ideas as to what might be causing this issue. Has anyone out there experienced a similar issue on the Azure platform? Any other suggestions would also be greatly appreciated.

Try enabling the "Always On" feature; this can be done via the web app settings.
Azure App Service stores the site content on a file server and loads the site onto a web worker when it is active. If the site sits idle, it is unloaded; when the next request comes in, the site has to be reloaded, which may cause the slowdown you are experiencing. In addition, content may need to be ngen'd again, which would further contribute to the slowdown.
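If you'd rather script it than click through the portal, the same setting can be flipped with today's Azure CLI (a sketch; the resource group and app names are placeholders):

    az webapp config set --resource-group MyResourceGroup --name my-web-app --always-on true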
Documentation for configuring web apps is here: https://azure.microsoft.com/en-us/documentation/articles/web-sites-configure/

Related

Overbyte ICS simple HttpProxy

I'm building a simple proxy: it listens locally, and requests are forwarded to a real external online proxy. I have tried many approaches, but without results.
My online proxy for testing: 194.163.148.227:3128
There is an IcsHttpProxy component that is supposed to do this job, but I ran into a lot of obstacles, including the many options related to security certificates. I do not know what to set when sites can differ so much (certificate version, key strength, etc.). Do I have to configure this manually for each site?
In fact, I built this project using Indy's components, and it worked without problems, without much work, and without going into the details of certificates. But I am interested in doing this project with the Overbyte ICS tools.
All I need is a simple proxy that forwards browser requests to an external proxy without getting into the details of security certificates. Is this possible?

Azure website suddenly responds slowly

I have an Azure website consisting of a WCF endpoint and an MVC website, both running on Azure. It runs on a Basic medium/large tier, so there is no CPU cap as there is on the Free tier. This has been running perfectly for probably 6 months, with regular deployments and updates, and performance has remained consistent as expected. But now it suddenly takes forever to load the MVC website.
The flow is as follows: we receive a call via the WCF endpoint and then direct people to a URL on the MVC website. Everything resides in the same "web site" inside Azure.
The strange thing is that I can see no difference in my log files. The WCF endpoint responds as quickly as always, and from what I can see the heavy lifting inside the MVC site also performs as expected, yet the user is still left waiting forever at the specified URL.
As I said, I can't see anything in the performance logs for the MVC controllers, so it seems to be the HTTPS request itself that takes ages. But how do I debug or measure this?
I am in the process of setting up Visual Studio 2015 to look at the remote profiling that can be generated through Kudu, but somehow I don't think the problem resides there. I am drawing a blank, so any thoughts on what could be wrong and how to debug it would be appreciated. Also, has Azure released something within the last couple of weeks that might have slowed the application down?
Any chance you have Application Insights turned on for the MVC site? It has a feature that tracks dependency calls and should be able to give you a good idea of what is taking so long.
https://azure.microsoft.com/en-us/documentation/articles/app-insights-asp-net-dependencies/
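If the automatic collection misses a particular call, you can also time it by hand with the SDK's TelemetryClient. A sketch, assuming the Application Insights SDK is installed and configured (exact TrackDependency overloads vary by SDK version, and the class and dependency names below are made up):

    using System;
    using System.Diagnostics;
    using Microsoft.ApplicationInsights;

    public class BackendClient
    {
        private readonly TelemetryClient telemetry = new TelemetryClient();

        public void CallBackend()
        {
            var start = DateTimeOffset.UtcNow;
            var timer = Stopwatch.StartNew();
            bool success = false;
            try
            {
                // ... the call you suspect is slow ...
                success = true;
            }
            finally
            {
                timer.Stop();
                // Shows up under Dependencies alongside the auto-collected calls.
                telemetry.TrackDependency("backend-api", "GET /orders", start, timer.Elapsed, success);
            }
        }
    }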

Rails: What is the use of web servers (Apache / nginx / passenger)?

Hi, I've been learning Rails for the past half year and have a few apps up on Heroku. So for me, deploying apps onto the World Wide Web seemed as simple as a heroku push. However, I just got my first internship doing Rails, and one of my seniors is talking about Apache and nginx. I'm not sure where they fit into the picture, since I thought apps consisted of only Rails plus a cloud app platform. I have looked it up, but I still don't get how and where they affect my app's life cycle. Can someone explain the what/where/when of using web servers?
So you've got your Rails app, and as you know, you've got controllers, actions, views, and whatnot.
When a user in their browser goes to your app on Heroku, they type in the URL which points to the Heroku servers.
The Heroku servers are web servers that listen for your users typing in the URL and connect them to your Rails application. The Rails application does its thing (gets a list of blog posts or whatever) and the server sends this information back to the user's browser.
You've been using a web server the whole time, just it was abstracted away from you and made super simple thanks to Heroku.
So the life cycle is somewhat like this:
Whilst you've been building your applications on your development machine, you've probably come across the command rails server. This starts a program called WEBrick, a web server that listens on port 3000. You go to your app via http://localhost:3000.
WEBrick listens on port 3000 and responds to requests from users, such as the "hey give me a list of posts" command.
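In command form (a sketch; WEBrick was the default development server in older Rails versions, newer ones default to Puma):

    rails server           # boots the default dev server on http://localhost:3000
    rails server -p 4000   # same thing on a different port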
When you push your code into production (in your experience, via heroku push), you're sending your code to a provider who takes care of the production equivalent of rails server for you.
A production setup (which your senior developers are talking about) is a bit more complex than your local rails server setup on your development machine.
In production you have your Rails application server (often something like Unicorn or Passenger), which takes the place of WEBrick.
In a lot of production setups another server, such as Apache or nginx, is also used; this is the server the user connects to when they go to your application.
This server often acts as a bit of a router, working out how different types of requests should be handled. For instance, requests for static files (CSS, images, JavaScript, etc.) that are stored on the server might be served directly by Apache or nginx, since they do a fantastic (and fast) job of sending static assets back to the client.
Other requests, such as "get me a list of all blog posts", get passed on to the Rails server (Unicorn, Passenger, etc.), which in turn does the required work and sends the response to Apache/nginx, which sends it back to the client.
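In nginx terms, that split looks roughly like this (a sketch, not a production config; the paths, server name, and upstream socket are placeholders):

    upstream rails_app {
        # Unicorn/Passenger listening on a local socket
        server unix:/var/run/unicorn.sock fail_timeout=0;
    }

    server {
        listen 80;
        server_name example.com;
        root /var/www/myapp/public;

        # Static assets go straight from disk, never touching Rails
        location ~ ^/(assets|images|javascripts|stylesheets)/ {
            expires max;
        }

        # Everything else: try a file on disk first, then hand off to Rails
        location / {
            try_files $uri @app;
        }

        location @app {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://rails_app;
        }
    }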
Heroku does all this for you in a nice, easy-to-use package, but it sounds like the place you're working at manages this themselves rather than using Heroku. They've set up their own bunch of web servers and will have their own way of doing the equivalent of heroku push, which sends the code to the servers and makes sure they're up and running, ready to respond to user requests.
Hope that helps!
Web pages need a web server to make them available on the Internet.
So a site that is all static content (just .html pages) only needs a web server, and that's where Apache, nginx, etc. come in. They are web servers.
When you use a framework like Rails, an additional component is added: an application server. This pre-processes the pages using the Rails framework and then (still) uses the aforementioned web server to make the final pages (which are .html, of course) available to end users through their browsers.
Phusion Passenger is an application server that, with Rails, will help manage and automate the deployment of code.
Heroku is a cloud service, meaning they take care of hardware and software, allowing you to seamlessly publish your application without worrying about what is going on behind the scenes. So the only thing you have to do is push your code to their Git remote, and voilà.
On the other hand, Rails can also be deployed on a system you build completely from scratch, where you are responsible not only for the app's development but also for server maintenance and the choice of hardware and/or software. You could then choose between several application servers capable of running Rails, such as Passenger (typically run under Apache or nginx).
Hope that helps.

heavy RoR app horizontally scaled on AWS needs efficient SSL

I am running a Rails app on AWS infrastructure using several EC2 instances, an RDS DB, a round-robin session-sticky load balancer, and Route 53.
The application serves pages for several domain names (the same app looks and functions differently depending on the domain name).
The Rails code is hosted on an NFS share on a staging instance, where the web server runs in development mode, while the other boxes load the Apache config and application code via NFS and run in production mode.
What I'd like to do is SSL-enable the whole thing, as we're starting to process payments and whatnot. Due to the nature of the application and the heavy Apache/Passenger optimization in place, I can't set up a vhost for each domain; instead I use a wildcard for www.* to load pretty much the same code, and the app does the rest internally.
Haven't really been able to figure out an ideal way to resolve this. Would anyone have an idea?
Thanks!
After a bit of discussion in the comments, we came to this conclusion:
The application is currently hosted in a single <VirtualHost> on Apache, where the application differentiates between hostnames for the different layouts.
The problem is to support SSL without having to set up each domain with its own certificate and a separate vhost, as that would require running the Rails app multiple times where it's unnecessary.
By using a Multi-Domain Certificate (MDC), this problem can be solved with only one vhost and one certificate. But MDCs are more expensive than normal certificates, so depending on the number of domains you need to support, it may be cheaper/easier to do it manually with multiple certs, or you can pay for the more expensive MDC and save time and maintenance cost.
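For illustration, the single-vhost, single-certificate setup might look like this in Apache (a sketch; the domains and file paths are placeholders, and the certificate must list every hostname as a Subject Alternative Name):

    <VirtualHost *:443>
        ServerName www.example1.com
        ServerAlias www.example2.com www.example3.com

        SSLEngine on
        SSLCertificateFile      /etc/ssl/certs/mdc.crt
        SSLCertificateKeyFile   /etc/ssl/private/mdc.key
        SSLCertificateChainFile /etc/ssl/certs/ca-chain.crt

        # One Rails app instance serves every domain; the app switches
        # layout internally based on the Host header it receives.
        DocumentRoot /var/www/myapp/public
    </VirtualHost>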
While at it, I found this nice Wikipedia comparison of Certificate Authorities and their trust level in different browsers:
http://en.wikipedia.org/wiki/Comparison_of_SSL_certificates_for_web_servers

ASP.NET MVC performance with extensionless URLs on IIS 6

We are getting ready to do an initial deployment of an ASP.NET MVC app on IIS 6 running on Windows Server 2003. We've been reading about performance issues involving the use of extensionless URLs in MVC applications, specifically when removing the '.aspx' extension from the controller portion of the URL.
Has anyone who has deployed an MVC app experienced performance degradation in this area? Was it noticeable, and was it worth it for the cleaner URLs? Our application will rarely have to deal with more than 1,000 or so concurrent users.
Edit: Thanks for all the responses. It's working quite well. There are a few strange requests coming through, as some people mentioned, but I think we can work around those using the suggestions here.
We recently deployed an app that received approx. 20 million page views over a 3 month period using the IIS 6 wildcard mapping setup and had no performance issues. We did host most of our images on a CDN, but other static content was served directly from the site.
For what it's worth, IIRC, the ASP.NET handler will pass requests for static file types back to IIS through a default handler for processing. The only practical performance hit is the time a worker thread is occupied identifying and transferring the request. In all but the most extreme scenarios, this is too trivial to matter.
As an extra note, we load tested the application I mentioned prior to going live and found that it could handle nearly 2000 static requests per second and around 700 requests per second for pages that involved database activity. The site was hosted on 4 IIS 6 servers behind a ZXTM load balancer with a 1GB internet pipe.
Here's a link with some good advice on the whole static file handling business:
http://msmvps.com/blogs/omar/archive/2008/06/30/deploy-asp-net-mvc-on-iis-6-solve-404-compression-and-performance-problems.aspx
The problem with extensionless URLs on IIS 6 is that you don't want static requests going through the ASP.NET stack. If all of your static requests come from one (or two) subfolder(s), you can exclude them. This should fix the performance issue.
Quoting from the linked post:
Now, to remove the wildcard map on the /Content subdirectory, open a command prompt, go to c:\Inetpub\AdminScripts, and run:

    adsutil.vbs SET /W3SVC/105364569/root/Content/ScriptMaps ""

… replacing 105364569 with the “identifier” number of your application. (Also, you could replace “Content” with the path to any other directory.)
We ran a fairly busy site with IIS 6 wildcards on for extensionless URLs, and although we never noticed much of a performance hit, we had a little hack that worked quite well:
For all folders that contained only static files, such as /css, /images, and /scripts, we set each one up in IIS as its own application and disabled the wildcard setting, which meant IIS handled those requests directly rather than routing them through ASP.NET.
URL rewriting can help you solve the problem. I've implemented a solution that allows deploying an MVC application on any IIS version, even when virtual hosting is used.
http://www.codeproject.com/KB/aspnet/iis-aspnet-url-rewriting.aspx
Instead of having ASP.NET serve all requests, you could specify, e.g., mvc as the extension (say, index.mvc) and map that extension to aspnet_isapi.dll in IIS 6.
This means only known extensions will be processed by ASP.NET; others, like static files, stay the same as before, i.e. served by IIS itself.
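With that approach the extension is carried in the default route. A sketch of what the route registration would look like (the route values are the usual template defaults):

    using System.Web.Mvc;
    using System.Web.Routing;

    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // The .mvc extension is mapped to aspnet_isapi.dll in IIS 6,
        // so only these URLs enter the ASP.NET pipeline.
        routes.MapRoute(
            "Default",
            "{controller}.mvc/{action}/{id}",
            new { controller = "Home", action = "Index", id = UrlParameter.Optional });
    }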
