I'm writing an ASP.NET Web API application hosted on IIS7 (no special configuration).
My problem is that every first request (from a new machine, a new browser, after the site has been idle for a while...) has a long delay - even for requests that just return constants!
I've read about "warming up" scripts, but that's not the issue here. It seems like the web server is trying to create a session and it takes a very long time. Any suggestions?
EDIT
I think the delay is caused by worker-process creation for each new session. Now the question is why it is so slow, and why doesn't the web server reuse existing worker processes to serve requests?
I have configured the application pool to limit worker processes to 5 with no idle timeout (set to 0). This caused the first five sessions to be slow on their first requests (which I can live with), and now the worker processes stay alive. But surprisingly, from time to time, a request is slow again!
If you are using Windows Server 2008 R2, you can configure the Auto-Start feature on the application pool. Also, in the application pool's properties, you should disable recycling at regular intervals. Bear in mind, though, that while this will limit the slowness, the application pool can still be recycled by IIS. With the Auto-Start feature it will be loaded back into memory automatically, but the code in your Application_Start will only be executed on the next request, so you could still observe some slowness.
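For reference, here is a rough sketch of what those two settings can look like in applicationHost.config on IIS 7.5 (the pool name and runtime version are placeholders; adjust to your own pool):

<applicationPools>
  <add name="MyAppPool" managedRuntimeVersion="v4.0" startMode="AlwaysRunning">
    <recycling>
      <!-- a value of 00:00:00 disables the scheduled periodic restart -->
      <periodicRestart time="00:00:00" />
    </recycling>
  </add>
</applicationPools>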
Another cause can be https. Our site can run with and without https, and the delay on the first page (5 to 15 seconds) occurs only when using https. This post explains the issue with https and the fix:
https issue by the MCS team
We have a Vaadin 14 (Flow) application which is fronted by an Apache reverse proxy that integrates with Gluu for authentication using OpenID (mod_auth_openidc).
This is generally working fine, except when users leave their browser open with the application idle for a long time, until the max session time of the OpenID session is reached. The problem is, at that point the Vaadin client keeps trying to send heartbeat requests. This, in combination with this mod_auth_openidc issue, results in state cookies piling up and reaching a limit so that the user has to close her browser before being able to re-login.
I've tried various things (unsuccessfully) in order to get the server to instruct the browser to visit a logout URL when a heartbeat request is received after session timeout (in combination with vaadin.closeIdleSessions=true), but even if it worked it wouldn't be a solution for other browser tabs that may also be open at that time and sending heartbeat requests.
What we really want is to limit the number of times the Vaadin client retries sending the heartbeat requests (say, max 3 times) and then just stop sending requests (and maybe display a message asking the user to log in again).
Is this possible in any way? The current workaround is to disable the heartbeats completely, but this doesn't seem ideal (Vaadin won't detect idle UIs).
The UI instance has a ReconnectDialogConfiguration, which includes a reconnectAttempts property to control how many times requests (including heartbeat requests) are retried. The default seems to be 10000.
In Vaadin 14 (LTS) this can be set using PageConfigurator.
In Vaadin 18 (the latest release) this is done using AppShellConfigurator.
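For example, here is a minimal sketch (plain Vaadin Flow assumed; the listener class name is made up, and it would be registered via META-INF/services/com.vaadin.flow.server.VaadinServiceInitListener) of capping the attempts through the UI's ReconnectDialogConfiguration mentioned above:

import com.vaadin.flow.component.ReconnectDialogConfiguration;
import com.vaadin.flow.server.ServiceInitEvent;
import com.vaadin.flow.server.VaadinServiceInitListener;

// Hypothetical listener; applied to every new UI as it is initialized.
public class LimitReconnectAttemptsInitListener implements VaadinServiceInitListener {

    @Override
    public void serviceInit(ServiceInitEvent event) {
        event.getSource().addUIInitListener(uiEvent -> {
            ReconnectDialogConfiguration config =
                    uiEvent.getUI().getReconnectDialogConfiguration();
            config.setReconnectAttempts(3); // stop retrying after 3 failed attempts
            config.setDialogTextGaveUp("Connection lost. Please log in again.");
        });
    }
}

If the above is right that reconnectAttempts also covers heartbeat retries, this should approximate the "max 3 times, then stop" behaviour you're after.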
Disabling the heartbeats is the way to go. If you need something more nuanced than that, you'll need to make a change (maybe add a configuration option) to how heartbeats work in Vaadin. Creating a ticket on GitHub could be a good place to start.
I'm using Electron, which is based on Chromium, to create an offline desktop application.
The application uses a remote site, and we are using a service worker to make parts of the site available offline. Everything is working great, except for one situation that I call the "airplane wifi situation".
Using Charles, I have restricted the download bandwidth to 100 bytes/s. The request is sent through webview.loadURL, which eventually calls LoadURLWithParams in Chromium. The problem is that it does not fail and then activate the service worker, the way having no connection at all would. Once the request is sent, it waits forever for the response.
My question is: how do I time out the request after a certain amount of time and load everything from the service worker, as if the user were truly offline?
An alternative to writing this yourself is to use the sw-toolbox library, which provides routing and runtime caching strategies for service workers, along with some built-in options for helping with these sorts of advanced use cases. In particular, you'd want to use the networkTimeoutSeconds parameter to configure the amount of time to wait for a response from the network before you fall back to a previously cached response.
You can use it like the following:
toolbox.router.get(
  new RegExp('my-api\\.com'),
  toolbox.networkFirst,
  { networkTimeoutSeconds: 10 }
);
That configures a route matching GET requests whose URLs contain my-api.com, and applies a network-first strategy that will automatically fall back to the previously cached response if the network hasn't responded within 10 seconds.
I have deployed a Rails app at Engineyard in the production and staging environments. I am curious to know whether every HTTP request for my app initializes a new instance of my Rails app or not.
Rails is stateless, which means each request to a Rails application has its own environment and variables that are unique to that request. So, a qualified "yes": each request starts a new instance[1] of your app; you can't determine what happened in previous requests, or in other requests happening at the same time. But bear in mind the app will be served from a fixed set of workers.
With Rails on EY, you will be running something like Thin or Unicorn as the web server. This will have a defined number of workers, let's say 5. Each worker can handle only one request at a time, because that's how Rails works. So if your requests take 200 ms each, each worker can handle approximately 5 requests per second. If one request takes a long time (a few seconds), that worker is not available to take any other requests. Workers are typically not created and removed on Engineyard; they are set up and run continuously until you re-deploy. On something like Heroku, by contrast, your app may have no workers (dynos) running if no requests are coming in, and it will have to spin up.
[1] I'm defining "instance" as a new instance of the application class. Each model and class will be re-instantiated, and the #request and #session will be built from scratch.
From what I have understood: no, it will definitely not initialize a new instance for every request. Then again, two questions might arise.
How can multiple users simultaneously log in and access my system without interference?
Even if one user takes up too much processing time, how is another user able to access other features?
The answer to the first question is that HTTP is stateless: everything is stored in the session, which is kept in a cookie on the client machine and not on the server. So when an HTTP request is sent for a logged-in user, the browser actually sends the request with the required credentials/user information from the client's cookies to the server, without the user knowing it. Multiple requests are just queued and served accordingly. Since servers are very, very fast, it feels like they are processed instantly.
For the second question, the answer might be concurrency. The server you are using (nginx, Passenger) has the capacity to serve multiple requests at the same time. Even if the server is busy with one user's request (let's say for video processing), it can serve another request through another thread, so that multiple users can access the system simultaneously.
My goal is to send an email out every 5 minutes from my application, even if there isn't a browser open to the application.
I'm using FluentScheduler to manage the tasks, which works until the server decides to kill the application due to inactivity.
My big constraints are:
I can't touch the server. It is how it is and I have to work around it.
I can't rely on a client refreshing a browser or anything else along the lines of using client side scripts.
I can't use any scheduler that uses a database.
What I have been focusing on is trying to create an artificial postback.
Note: The server is load balanced, so a solution could make use of that.
Is there any way that I can keep my application from getting killed by the server?
You could use a monitoring service like https://www.pingdom.com/ to ping the server at regular intervals. Just make sure it hits an endpoint that invokes .NET code and not a static resource.
I have an ASP.NET MVC 4 site and it is slow on the first request.
It is not extremely slow, but pages that usually take 700-1000 ms to load take 8-15 seconds the first time. It happens when I wait for 10 minutes, for example, and then come back and make a request. The web site is not on a production server yet. Could it be that the app pool goes to sleep when it doesn't receive any requests?
I have configured the new Auto-Start mode in .NET Framework 4:
http://weblogs.asp.net/scottgu/archive/2009/09/15/auto-start-asp-net-applications-vs-2010-and-net-4-0-series.aspx
I think it may be the first request to SQL Server Express 2012 (on the same server).
I have set AUTO_CLOSE to OFF on the database.
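In case it helps anyone, the equivalent T-SQL is a one-liner (the database name is a placeholder):

ALTER DATABASE [MyAppDb] SET AUTO_CLOSE OFF;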
What more can I do? How can I see what is going on during the first request so I can avoid that slow response?
Thanks to everyone who has contributed to this question.
In the end, I think it has to do with the idle timeout configuration of the app pool.
It was set to 5 minutes (the default) and I have set it to 60 minutes, and now it works fine!
Thanks to this question:
First request is very slow after website sits idle with ASP.NET MVC 3 (IIS7)
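For reference, the setting I changed lives under the application pool's process model; in applicationHost.config it looks roughly like this (the pool name is a placeholder, and the same value appears as "Idle Time-out (minutes)" in the pool's Advanced Settings in IIS Manager):

<applicationPools>
  <add name="MyAppPool">
    <!-- 01:00:00 = 60 minutes -->
    <processModel idleTimeout="01:00:00" />
  </add>
</applicationPools>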
You can compile the views for faster performance.
The documentation is for MVC 3, but it should still work:
Compile Views in Asp.Net MVC 3 with Visual Studio
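If I remember correctly, that approach boils down to enabling the MvcBuildViews MSBuild property in the project's .csproj, roughly like this (see the linked question for the full details):

<PropertyGroup>
  <!-- compiles the views at build time instead of on first request -->
  <MvcBuildViews>true</MvcBuildViews>
</PropertyGroup>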