When I work in the dashboard (editing content items or settings), I am sometimes redirected to the login page.
It happens only on virtual hosting, where there is a memory limit of 1280 MB. Sometimes the IIS log shows events like these:
A worker process with process id of '49292' serving application pool '...' has requested a recycle because it reached its virtual memory limit.
A worker process serving application pool '...' has requested a recycle because it reached its private bytes memory limit.
I don't know whether the loss of authorization happens because of the memory limit, but on my local machine, with the same limit and the same log events, everything works fine.
How can I fix the loss of authorization, and why might it happen?
1280 MB should be about 4 times what an Orchard instance needs under ordinary circumstances, so the first thing to do is probably to take a memory profile and find out what's eating so much memory. You very likely have a memory leak somewhere. This is the problem you should be focusing on: AppDomain restarts are expensive and leave your application unresponsive for seconds.
Now yes, that still shouldn't drop authorization, which makes me think that you also have a misconfigured server. Most likely, you didn't configure a machine key, causing a new one to be generated at each restart, which makes existing authentication tokens invalid.
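If that is what's happening, one fix is to pin an explicit machine key in web.config so that the same key survives every restart. A minimal sketch (the key values below are placeholders; generate your own, for example with the Machine Key feature in IIS Manager, and keep them identical on every server that hosts the app):

    <system.web>
      <!-- Placeholder values: replace with keys generated once and kept stable,
           so forms-authentication tickets remain valid across worker process recycles. -->
      <machineKey
        validationKey="REPLACE-WITH-GENERATED-VALIDATION-KEY"
        decryptionKey="REPLACE-WITH-GENERATED-DECRYPTION-KEY"
        validation="HMACSHA256"
        decryption="AES" />
    </system.web>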
But really, your main problem is that memory footprint.
Related
We have just launched a new MVC5 web site. The site uses Entity Framework for its data and also implements a couple of WebApi services for some simple AngularJS pages used on the web site.
The site has gone through development and testing without a problem, but now that it is installed on an IIS 8.5 production server, we are seeing the following entries in the IIS (WAS) event logs:
Here is the first error:
A worker process serving application pool 'xxx' has requested a recycle
because it reached its private bytes memory limit.
Around 90 seconds later we see this error:
A worker process '4880' serving application pool 'xxx' failed to stop
a listener channel for protocol 'http' in the allotted time. The data
field contains the error number.
Which is immediately (the same time to the second) followed by a third error:
A process serving application pool 'xxx' exceeded time limits during
shut down. The process id was '4880'.
Finally, we see another Application Pool recycle event:
A worker process serving application pool 'xxx' has requested a recycle
because it reached its private bytes memory limit.
We are currently seeing this problem approximately once per day and it does not seem to be related to site traffic/loading.
The reason we set the Application Pool to recycle when Private Bytes consumption exceeds 4,194,304 KB (4 GB) - it normally sits at less than 1 GB for perhaps 36 hours - was that we had noticed the Application Pool's private memory consumption would occasionally increase linearly. Again, we did not see this during development or local testing.
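For reference, that threshold corresponds to the pool's private-memory recycling setting; in applicationHost.config it looks roughly like this (pool name elided as in the question):

    <applicationPools>
      <add name="xxx">
        <recycling>
          <!-- Private Bytes limit in KB; 4,194,304 KB = 4 GB -->
          <periodicRestart privateMemory="4194304" />
        </recycling>
      </add>
    </applicationPools>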
We have tried running load tests of several hundred concurrent users across the application, but have been unable to replicate this error sequence.
We have also run the application locally for extended periods of time with ReSharper's dotMemory profiler and memory snapshots do not reveal any problems.
Are there any tools/techniques available that we can run on the production server that would give us more information on what is happening?
I have an MVC3, .NET 4.5 ASP.NET web application hosted on Azure Websites.
I am experimenting with "Free", "Shared" and "Standard" scaling configurations.
I have noticed that after a period of inactivity the compiled code gets dropped from memory, or the app pool gets recycled, forcing a JIT recompile.
My main question is: what is the time period before the compiled code gets dropped, forcing a recompile? I assume this is a result of the application pool recycling? I have come across this on standard shared hosts such as DiscountASP.
My second question is: what is the best approach to minimise this issue, as I would not like my users to run into this recompilation lag? My initial thought is precompilation.
Many thanks in advance.
EDIT:
I have found a related SO post on this here: App pool timeout for azure web sites
However it seems that, as with standard shared hosting, one cannot change App Pool recycling. One has more flexibility with the "Standard" scale option, since it is dedicated. So the likely options at present are:
1) Precompilation (see the sketch after this list)
2) Use of "Keep alive" ping sites.
EDIT2:
1) "Keep Alive" approach seems to be working. I have a 10 minute monitor running.
I believe the inactivity period is 20 minutes by default. I haven't used Azure Websites yet, so I'm not familiar with the restrictions on changing settings, but one quick way to keep your site active is to use an uptime monitoring service like Pingdom (you can check one site for free at the time of writing); it will ping your site regularly and prevent it from becoming idle.
I have an MVC application that is running on a hosted server. I have an issue where the host is forcing app pool recycles. They say that my RAM is limited to 128 MB. While this isn't very much, I'm not so sure that my app is causing this - the problem is I need to understand how it works!
My app runs at about 25 MB with no significant memory leak.
What I don't really understand is how this can get up to 128 MB very quickly - it seems to be a problem when an admin user is logged in - the user identity is wiped almost immediately.
How does memory usage fluctuate with number of users on the site?
Thanks for any help
I've read somewhere that application pool recycling shouldn't be very noticeable to the end user when overlapping is enabled, but in my case it results in responses at least 10 times longer than usual (depending on load, response time grows from the regular 100 ms up to 5000 ms). Also, that is not for a single request, but for several requests right after the pool recycles (I was using ~10 concurrent connections when testing this).
So my questions would be:
In my opinion, I don't do anything that would take a long time on application start - in general, it is only IoC container and routing initialization. And even if I did do something, isn't that what overlapping is supposed to take care of?
Is the SQL connection pool destroyed during pool recycling, and could that be a reason for the long response times?
What would be the best method to profile what is taking so long? Also, maybe there are ideas about what could take so long on the IIS/.NET side, and how to avoid it.
Overlapping only means that the old worker process will be kept running while the new one is started. As soon as the new one is started, it begins handling all requests. "Started" does not mean that initialization (which might be contained in Application_Start, any static constructors in your application, or any one-time, contentious tasks like proxy building) has been completed. This means that new requests will have to wait while these processes complete, even though the "old" worker process might still be available for a short time. Also, if your application uses any kind of caching, your new caches will be "cold", meaning there will be some additional processing time required until the caches are warmed up.
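If that start-up cost can't be eliminated, one mitigation (assuming IIS 8+ with the Application Initialization module installed) is to let IIS issue warm-up requests itself when a new worker process spins up. A sketch for web.config, where /warmup is a hypothetical endpoint that would exercise the IoC container, routes and caches:

    <system.webServer>
      <applicationInitialization doAppInitAfterRestart="true">
        <!-- /warmup is a hypothetical endpoint; point this at whatever hits
             your expensive start-up paths -->
        <add initializationPage="/warmup" />
      </applicationInitialization>
    </system.webServer>

For best effect the site also needs preloadEnabled="true" and the pool startMode="AlwaysRunning"; even then this tends to reduce, rather than completely hide, the cold-start window.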
Yes - your new application will have a new SQL connection pool.
In my experience, in a production environment, with well tested code and an application that requires consistent, high performance, I choose to disable application pool recycling altogether. Application Pool recycling is a "feature" introduced to combat the perception that IIS was unstable, when in fact what was usually really unstable was the applications that it was hosting. In my opinion, it is a crutch that allows people to deploy less than stable code. If it is causing you problems, turn it off and make sure your application doesn't have any memory leaks, etc. that might lead to long term application instability.
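For reference, a sketch of what turning it off can look like for a pool in applicationHost.config (the pool name is illustrative; the same settings are exposed in IIS Manager under the pool's Advanced Settings):

    <add name="MyAppPool">
      <recycling>
        <!-- 00:00:00 disables the regular time-interval recycle
             (the default is 1740 minutes, i.e. 29 hours) -->
        <periodicRestart time="00:00:00">
          <schedule>
            <clear />
          </schedule>
        </periodicRestart>
      </recycling>
    </add>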
I have a website that is hanging every 5 or 10 requests. When it works, it works fast, but if you let the browser sit for a couple of minutes and then click a link, it just hangs without responding. The user has to push refresh a few times in the browser and then it runs fast again.
I'm running .NET 3.5, ASP.NET MVC 1.0 on IIS 7.0 (Windows Server 2008). The web app connects to a SQL Server 2005 DB that is running locally on the same instance. The DB has about 300 MB of RAM and the rest is free for web requests, I presume.
It's hosted on GoGrid's cloud servers, and this instance has 1GB of RAM and 1 Core. I realize that's not much, but currently I'm the only one using the site, and I still receive these hangs.
I know it's a difficult thing to troubleshoot, but I was hoping that someone could point me in the right direction as to possible IIS configuration problems, or what the "rough" average hardware requirements would be using these technologies per 1000 users, etc. Maybe for a webserver the minimum I should have is 2 cores so that if it's busy you still get a response. Or maybe the slashdot people are right and I'm an idiot for using Windows period, lol. In my experience though, it's usually MY algorithm/configuration error and not the underlying technology's fault.
Any insights are appreciated.
What diagnostics are available to you? Can you tell what happens when the user first hits the button? Does your application see that request and then take ages to process it, or is there a delay and then your app gets going and works as quickly as ever? Or does that first request just get lost completely?
My guess is that there's some kind of paging going on; I believe Windows tends to have a habit of putting non-recently used apps out of the way and then paging them back in. Is that happening to your app, or the DB, or both?
As an experiment - what happens if you add a sneaky little "howAreYou" page to your app? It does the tiniest possible amount of work, such as getting a use count from the db and displaying it. Have a little monitor client hit that page every minute or so. Measure performance over time. Spikes? Consistency? Does the very presence of activity maintain your application's presence and prevent paging?
Another idea: do you rely on any caching? Do you have any kind of aging on that cache?
Your application pool may be shutting down because of inactivity. There is an Idle Time-out setting per pool, in minutes (it's under the pool's Advanced Settings - Process Model). It will take some time for the application to start again once it shuts down.
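If you do have control over the server, the same setting can be pinned in applicationHost.config; a sketch (the pool name is illustrative), where 00:00:00 disables the idle shutdown entirely:

    <add name="MyAppPool">
      <!-- idleTimeout defaults to 20 minutes; 00:00:00 means the pool is never
           shut down for inactivity -->
      <processModel idleTimeout="00:00:00" />
    </add>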
Of course, it might just be the virtualization like others suggested, but this is worth a shot.
Is the site getting significant traffic? If so I'd look for poorly-optimized queries or queries that are being looped.
Your configuration sounds fine assuming your overall traffic is relatively low.
Too many database connections without being released?
Connecting to some service/component that is causing a timeout?
Bad resource release?
Network traffic?
Looping queries, or loops in code logic?