I have an ActiveX control which, when first called or loaded in my ASP.NET application, is really slow to load. However, after the first load it is really quick.
My question is this: how do I make my ActiveX control load much faster when it is first called? Is there a way to preload the ActiveX control on the page so that when it is used or called, it doesn't take so long to load?
I have checked whether my ActiveX control is being called correctly by my JavaScript code, and it is. All my ActiveX control does is make a call to Outlook and set some user properties. Not much.
Please help; this has been doing my head in for days.
This is a known issue with the .NET Framework 1.1, 2.0, 3.0 and 3.5. You have a few options. Upgrade to the 4.0 Framework, preferably with IIS 7, which has the ability to ping your site and keep it alive. If you are not able to upgrade, you could also use a ping utility to keep your site "alive". Basically, this hits your site every few minutes so that you do not encounter the 20-minute default timeout period of your application domain. When your application domain times out, it is reloaded on the next request, hence the slowness you are experiencing.
http://www.spikesolutions.net/ViewSolution.aspx?ID=c2b7edc0-5de1-4064-a432-05f6eded3b82
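For illustration, a minimal keep-alive pinger could be a small console app run as a scheduled task from another machine; the URL and interval here are illustrative assumptions, not part of the original answer:

using System;
using System.Net;
using System.Threading;

class KeepAlive
{
    static void Main()
    {
        while (true)
        {
            using (var client = new WebClient())
            {
                // Any request resets the application domain's idle timer.
                client.DownloadString("http://www.example.com/");
            }
            Thread.Sleep(TimeSpan.FromMinutes(10)); // well under the 20-minute default
        }
    }
}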
I'm trying to improve app launch performance for subsequent logins (every login after the first) with my mobile app, and after adding some stopwatch diagnostics I can see that defining my 8 tables with MobileServiceSQLiteStore.DefineTable<T> takes on average 2.5 seconds. Every time.
On an iPhone 4 running iOS 7, the loading time would be less than a second if it weren't for having to define these tables every time. I would expect them to only need to be defined on the first run of the app, when the SQLite database is set up. I've tried removing the definitions on subsequent logins and just getting the sync tables, but that fails with "Table is not defined".
So it seems this is the intended behavior. Can you explain why the tables need to be defined each time, and whether there is any workaround? The cost might be negligible considering my phone is pretty old now, but it is still something I would like to remove if possible.
Yes, it is required to be called every time, because the SDK uses it to know how to deserialize data if you read it via the untyped interface, i.e. IMobileServiceSyncTable instead of IMobileServiceSyncTable<T>.
As of now there is no workaround to avoid calling it each time. I'm surprised, however, that it is taking 2.5 seconds for you, because DefineTable does not do any database operations. It merely inspects the members on your type/JObject and maintains an in-memory dictionary for later re-use.
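For reference, a typical setup looks roughly like this; TodoItem and the app URL are illustrative, and the exact MobileServiceClient constructor arguments depend on your SDK version:

using Microsoft.WindowsAzure.MobileServices;
using Microsoft.WindowsAzure.MobileServices.SQLiteStore;
using System.Threading.Tasks;

public class TodoItem
{
    public string Id { get; set; }
    public string Text { get; set; }
}

static class SyncSetup
{
    public static async Task InitAsync()
    {
        var client = new MobileServiceClient("https://myapp.azure-mobile.net/");
        var store = new MobileServiceSQLiteStore("local.db");

        // DefineTable<T> only records the member shape in memory; no DB I/O.
        store.DefineTable<TodoItem>();
        await client.SyncContext.InitializeAsync(store);

        // The untyped interface depends on that recorded shape to deserialize rows.
        IMobileServiceSyncTable untyped = client.GetSyncTable("TodoItem");
    }
}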
I would recommend downloading and compiling the SDK and debugging your way through it to figure out where the time is actually spent.
I have an MVC project, and I'm looking at speeding things up. One thing I'm scratching my head over is BeginProcessRequest(), which I have no control over. Using New Relic, I found that this method on average consumes 90% of the time required for the transaction to complete.
The code in my controller is pretty simple. It looks for an active session for the user and redirects to their dashboard if it finds one. There aren't any database calls on the actual page. The only code is:
if (Session["UserID"] != null)
// Perform actions
The BeginProcessRequest() method takes almost 4 seconds, as you can see in the screenshot.
Surely this can't be something unique to my site? I'm using a small EC2 instance for the server, and although there are other applications running on it, the CPU and memory stay pretty much at 0 throughout the request.
EDIT: I reviewed the following post:
What happens in BeginProcessRequest()?
However, as my application is idle when the most time-consuming requests take place, I can't see how it could be related to competing threads.
I think the issue was with IIS: after I changed the idle time-out property of the application pool to one day, the initial load now seems much faster.
I also explicitly disabled session state on my home controller and ensured that SQL Server's auto-close option was set to OFF.
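For reference, a minimal sketch of disabling session state per controller; the attribute is available in ASP.NET MVC 3 and later, and HomeController is illustrative:

using System.Web.Mvc;
using System.Web.SessionState;

[SessionState(SessionStateBehavior.Disabled)]
public class HomeController : Controller
{
    public ActionResult Index()
    {
        // No session lock is taken for requests to this controller.
        return View();
    }
}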
If I make a change to a Razor view, recompile, or wait 15-20 minutes, a page might take anywhere from 3 to 20 seconds to render on the first hit. I understand that the view needs to be recompiled after a change. I also understand that the application is unloaded after a period of inactivity, but I thought that would be a one-time penalty on the very first hit. For me, though, it seems to apply to every single page.
Take, as an example, my homepage. According to YSlow it's a "B" with 15 components, weighing in at 250K (that's including MiniProfiler's extra jQuery reference). From MiniProfiler I see about 500ms on the first line (http://localhost:80); I'm assuming this includes the view compilation. But then I see 1200ms for Find:Index. There are no SQL calls. Total load time on the first hit is about 3000ms; subsequent hits are about 40ms.
On another page with a couple of partial views, the parent view takes 2400ms to "Find", one of the partial views takes 1000ms to find. The parent view also takes 3200ms to "Render". And the biggest impact is on the first line (http://localhost:80/User/Dashboard) which was a whopping 7000ms. This page has only 3 queries with a total query time of 100ms. The total time to load was more than 15000ms. Subsequent hits are about 250ms.
Our setup is ASP.Net MVC 3, Ninject, EF4.2, Razor view engine, ELMAH, NLog, Html5Boilerplate, and MvcMiniProfiler. I created a duplicate project and removed Ninject, ELMAH, NLog, and MvcMiniProfiler. Performance was only marginally faster. We have about 15 controllers and about 40 views, all in one area.
Is this normal performance? When we deploy to Azure, it's even worse (naturally) than testing locally. Are there suggestions for improvement?
Edit:
A first hit after compiling on IIS/localhost (in release mode and with compilation debug="false") can be about 15 seconds. The Azure deployment, running in release, has a faster first hit, but still in the range of 5-10 seconds. I tried David Ebbo's project but didn't see anything dramatic.
Do you deploy this application frequently? If so, I can see why first-hit performance would be a concern.
We deploy often, and have created a separate project to "warm up" our deployments. It is a unit test project that uses WebDriver to hit each uncompiled view in our app after it is deployed.
Basically, you just use the WebDriver API to fire up a browser, then Navigate() to each URL that needs to be compiled. Run it once, and the deployment is warm.
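For illustration, the warm-up test boils down to something like this; the URLs and driver choice are illustrative assumptions:

using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class WarmUp
{
    static void Main()
    {
        // Each view URL that needs compiling; illustrative list.
        var urls = new[]
        {
            "https://myapp.example.com/",
            "https://myapp.example.com/User/Dashboard"
        };

        using (IWebDriver driver = new ChromeDriver())
        {
            foreach (var url in urls)
            {
                // The first request to each view triggers Razor compilation
                // on the server, so real users get the fast path afterwards.
                driver.Navigate().GoToUrl(url);
            }
        }
    }
}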
Also, in Azure, you can turn off the idle timeout, so that your app never gets idled. We use this script:
%windir%\system32\inetsrv\appcmd set config -section:applicationPools -applicationPoolDefaults.processModel.idleTimeout:00:00:00
... and run it during the Azure deployment like so:
<Task commandLine="startup\disableTimeout.cmd" executionContext="elevated" taskType="simple" />
I have an ASP.NET MVC 2 Beta application where I need to block incoming requests for a specific action until I have some data available to return, or release the request after 30 seconds if no new data is available.
In order to accomplish this, I'm using AutoResetEvent.WaitOne(30000);
The big issue is that IIS does not seem to accept any new requests while the thread is blocked at the WaitOne instruction. New requests hang until the thread is released.
I need to be able to parallelize the requests while still keeping the WaitOne behavior.
Async handlers are what you're looking for. If you're building a Comet solution, you may want to check out our .NET implementation of a Comet server here; it'll save you some time. If you want to roll your own, you'll definitely need to use async handlers to avoid hitting upper concurrency limits by the time you get past 60 or 70 users, but even with async handlers you'll still have to do some fancy footwork. Basically, you will still hit upper limits in the thread pool unless you hand the requests off to a bounded thread pool that can manage all the incoming requests for you.
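As a rough sketch of the idea in MVC 2 terms, an AsyncController returns the request thread to the pool while waiting; Notifier here is a hypothetical stand-in for whatever signals that data has arrived (shown as a bare timeout), not a real library type:

using System;
using System.Threading;
using System.Web.Mvc;

public class UpdatesController : AsyncController
{
    public void PollAsync()
    {
        AsyncManager.OutstandingOperations.Increment();

        Notifier.WaitForData(TimeSpan.FromSeconds(30), data =>
        {
            AsyncManager.Parameters["data"] = data; // flows to PollCompleted
            AsyncManager.OutstandingOperations.Decrement();
        });
    }

    public ActionResult PollCompleted(object data)
    {
        if (data == null)
            return new EmptyResult(); // timed out with nothing new

        return Json(data, JsonRequestBehavior.AllowGet);
    }
}

// Hypothetical stand-in: invokes the callback with data when it arrives,
// or with null once the timeout elapses.
static class Notifier
{
    public static void WaitForData(TimeSpan timeout, Action<object> callback)
    {
        Timer timer = null;
        timer = new Timer(_ =>
        {
            callback(null); // nothing arrived within the window
            timer.Dispose();
        }, null, timeout, TimeSpan.FromMilliseconds(-1));
    }
}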
Good luck!
You should not be blocking incoming requests at all. If the data you need is not ready, return an empty response, or perhaps an error code.
For a web application, it is generally more advisable (though not a hard rule) to return a message telling the user to retry later, for whatever reason you choose to give.
Stalling/blocking the requests by "waiting" doesn't really help much, as the wait is non-deterministic, unless of course you have a mechanism to make it so.
I do not know the nature/context/traffic pattern of your website; 30 seconds may be a number that works for you. Perhaps my points above are not really relevant. Just my 2 cents.
Actually, it turns out that this behavior only happens with ASP.NET MVC 2 Beta. I had this working fine with MVC 2 Preview 2, and I rolled back to that version to re-test and confirmed that the application worked fine with it.
Now the question is: why am I seeing this different behavior between these two MVC releases, and what is the correct behavior I should expect in this scenario?
I used System.Timers.Timer in Global.asax in ASP.NET to set up a timer that schedules execution of a function, let's say transferMoney().
But it seems that this timer might stop unexpectedly after several hours, and this causes all the actions to be left pending. I want to know whether there are any better ways to set up a timer in ASP.NET MVC 1.0.
Thanks in advance!
It might just be because the application got recycled. Global.asax is not really the right place for long-running tasks, because if your AppDomain gets recycled your timer will die. I suggest moving the job to a Windows service instead.
Edit: Well, it's fairly easy to create a Windows Service project in Visual Studio: just go to [File] > [Add] > [New Project...] > [Windows] > [Windows Service] and you will get the stub code for the project.
It's hard to come up with a complete example here, so I suggest you google it. ;) There are tons of samples out there for you to look at.
This article on CodeProject seems to be a good introduction to Windows Services.
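As a rough sketch, the service skeleton with the timer moved into it could look like this; transferMoney() stands in for the poster's function, and the interval is illustrative:

using System;
using System.ServiceProcess;
using System.Timers;

public class TransferService : ServiceBase
{
    private Timer _timer;

    protected override void OnStart(string[] args)
    {
        _timer = new Timer(60000); // illustrative: fire every 60 seconds
        _timer.Elapsed += (sender, e) => TransferMoney();
        _timer.Start();
    }

    protected override void OnStop()
    {
        _timer.Stop();
        _timer.Dispose();
    }

    private void TransferMoney()
    {
        // the scheduled work goes here
    }

    public static void Main()
    {
        // Runs under the Service Control Manager, so the timer survives
        // independently of IIS app pool recycling.
        ServiceBase.Run(new TransferService());
    }
}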
Any timer you use in an ASP.NET app will eventually "terminate", but this is entirely expected behavior due to process recycling.
The timer will never work reliably, because IIS recycles the worker process regularly based on the Application Pool settings; when it recycles, your timer is destroyed and you need to recreate it.
You can check whether the timer object is still available and create it if it is not; using any other timer object will not work. But this still has a problem: if you don't have any web requests for a particular period of time, the application will still be unloaded and the timer destroyed. The best option is to set up a ping monitor from somewhere else to keep your website alive.
You can't reliably run a timer in ASP.NET. If there are no requests coming in, IIS can shut down the application, and it will not start again until the next request arrives.
Why do you think you need a timer? In most web applications a timer is not needed at all for periodic updates, unless they depend on an external source.
If you are just moving data around inside your application, the actual transactions don't have to happen at an exact interval; you only have to calculate what the result would be if they had happened. Whenever a request comes in, you calculate how many transactions would have occurred since the last request and apply them to catch up to the current state, as in the sketch below.
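A sketch of this catch-up approach, assuming a hypothetical hourly-interest model; the Account type and rate are illustrative:

using System;

public class Account
{
    public decimal Balance { get; set; }
    public DateTime LastProcessedUtc { get; set; }
}

public static class CatchUp
{
    private const decimal HourlyRate = 0.0001m; // illustrative rate

    public static void ApplyPendingTransactions(Account account, DateTime nowUtc)
    {
        // How many hourly transactions would have run since the last request?
        int missed = (int)(nowUtc - account.LastProcessedUtc).TotalHours;

        for (int i = 0; i < missed; i++)
            account.Balance *= 1m + HourlyRate;

        account.LastProcessedUtc = account.LastProcessedUtc.AddHours(missed);
    }
}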
If your transactions rely on an external source, so that they actually have to run at a specific time, you simply can't do it with ASP.NET alone. You need an application that runs outside IIS, for example one started periodically by the Windows Task Scheduler.
You could try System.Threading.Timer:
http://msdn.microsoft.com/en-us/library/system.threading.timer.aspx
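A minimal sketch, noting that it is still subject to the same app-domain recycling caveats described above; transferMoney() stands in for the poster's function:

using System;
using System.Threading;

class TimerDemo
{
    static void Main()
    {
        // Fire TransferMoney every hour, starting one hour from now.
        var timer = new Timer(_ => TransferMoney(), null,
                              TimeSpan.FromHours(1), TimeSpan.FromHours(1));

        Console.ReadLine(); // keep the process alive for the demo
        timer.Dispose();
    }

    static void TransferMoney()
    {
        // the scheduled work goes here
    }
}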