Hi, I have a project developed in ASP.NET MVC, using the Entity Framework database-first approach. The problem I am facing is that the first time a user loads my website, it takes an unusually long time to load, and I am trying to find the reason behind this. Can anyone help me?
For the moment, I am assuming that since I am using the database-first approach, the whole database model has to be loaded into my project before I can connect, and that this is what makes my website load slowly.
If this is the case, can someone guide me on generating the connection string dynamically at run time in Entity Framework and connecting to the database?
I have a stored procedure that returns a connection string, but I don't know how to use that connection string to connect to the database.
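For what it's worth, here is a minimal sketch of building an Entity Framework database-first connection string at run time with EntityConnectionStringBuilder. The model and context names (MyModel, MyEntities) and the helper class are placeholders for whatever your .edmx generated:

    using System.Data.Entity.Core.EntityClient; // EF6; use System.Data.EntityClient on EF4/5

    public static class ConnectionFactory
    {
        public static string BuildEntityConnectionString(string sqlConnectionString)
        {
            // sqlConnectionString is whatever your stored procedure returns,
            // e.g. "Data Source=.;Initial Catalog=MyDb;Integrated Security=True"
            var builder = new EntityConnectionStringBuilder
            {
                Provider = "System.Data.SqlClient",
                ProviderConnectionString = sqlConnectionString,
                // These resource names must match the artifacts in your .edmx
                Metadata = "res://*/MyModel.csdl|res://*/MyModel.ssdl|res://*/MyModel.msl"
            };
            return builder.ToString();
        }
    }

The generated ObjectContext accepts this string directly; a generated DbContext only has a parameterless constructor, but you can add an overload in a partial class that passes the string to base(...). Then:

    using (var db = new MyEntities(ConnectionFactory.BuildEntityConnectionString(connStrFromSproc)))
    {
        // query as usual
    }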
When you say "first time when user loads my website, it takes time to load which is unusual", could you give us a comparison of how slow the first load is versus subsequent loads? And if by "first time" you mean after restarting IIS or recycling the app pool, it can take some time because some of your code has to be turned into native code by the just-in-time compiler.
EF may slow you down if you have many hundreds of tables and it has to construct the context's model from all of those tables and relationships. If that were the case, though, you would see the delay every time you create a new EF context, not just on the first page load. Should that turn out to be the problem, you may have to split the model into multiple contexts, each managing a limited number of tables. A good read on that: http://msdn.microsoft.com/en-us/magazine/jj883952.aspx
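The article shows this with code-first style contexts; purely as an illustration (the entity and context names below are made up), the idea is that each context maps only the handful of tables its feature needs, so EF has far less metadata to build:

    using System.Data.Entity;

    public class Order { public int Id { get; set; } }
    public class Customer { public int Id { get; set; } }

    // One small context per feature area instead of one giant context
    public class OrderingContext : DbContext
    {
        public DbSet<Order> Orders { get; set; }
    }

    public class CustomerContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }
    }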
Other than that, I would suggest the usual things, such as pre-compiling your views (http://geekswithblogs.net/Aligned/archive/2013/05/28/pre-compiling-your-mvc-views.aspx) and using caching.
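For the caching side, even a plain output cache on the heaviest actions can help; a minimal sketch (the controller name and duration are arbitrary):

    using System.Web.Mvc;

    public class ReportsController : Controller
    {
        // Cache the rendered result for 5 minutes; one copy shared by all users
        [OutputCache(Duration = 300, VaryByParam = "none")]
        public ActionResult Index()
        {
            return View();
        }
    }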
I'd love to hear how it went.
I'm trying to improve app launch performance for subsequent logins (every login after the first) with my mobile app, and after adding some stopwatch diagnostics I can see that defining my 8 tables with MobileServiceSQLiteStore.DefineTable<T> takes on average 2.5 seconds. Every time.
On an iPhone 4 running iOS 7 the loading time would be less than a second if it weren't for having to define these tables every time. I would expect them to only need to be defined on the first run of the app, when the SQLite database is set up. I've tried removing the definitions on subsequent logins and just getting the sync tables, but that fails with "Table is not defined".
So it seems this is the intended behavior. Can you explain why the tables need to be defined each time, and/or whether there is any workaround? The cost may be negligible considering my phone is pretty old now... but it is still something I would like to remove if possible.
Yes, it is required to be called every time, because the SDK uses it to know how to deserialize data if you read it via the untyped interface, i.e. IMobileServiceSyncTable instead of IMobileServiceSyncTable<T>.
As of now there is no workaround to avoid calling it each time. I'm surprised, however, that it is taking 2.5 seconds for you, because DefineTable does not perform any database operations. It merely inspects the members of your type/JObject and maintains an in-memory dictionary for later reuse.
I would recommend downloading and compiling the SDK and debugging your way through it to figure out where the time is actually spent.
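Before diving into the SDK, you could also time each definition individually to see whether one table dominates; a rough sketch (TodoItem stands in for your own types, and client is your MobileServiceClient):

    var store = new MobileServiceSQLiteStore("localstore.db");

    var sw = System.Diagnostics.Stopwatch.StartNew();
    store.DefineTable<TodoItem>();
    System.Diagnostics.Debug.WriteLine("TodoItem: {0} ms", sw.ElapsedMilliseconds);
    sw.Restart();
    // ...repeat for the other seven tables...

    await client.SyncContext.InitializeAsync(store);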
I use Rails to present automated hardware testing results; our tests are run mainly via TCL. Recently we implemented a "log4TCL", which is basically a translated version of log4j. The log files have upwards of 40,000 lines, each of which is written to the database as a logline record, and the load time for the view is too long to be considered usable. I have tried using AJAX requests to speed things up, but the initial query/page load accounts for ~75% of the full page load.
My solution is page caching. I cannot use Rails' built-in page caching because each log report is a different instance of "log_viewer": the report is generated from a test_run_id parameter, and built-in page caching would only cache one instance of "log_viewer.html". What I need is "log_viewer_#{test_run_id}.html", and I have implemented a way of doing this. Reports age out after one week and are purged from the test_runs/log_viewer_cache directory to save disk space. If an older report is needed, loading the page re-generates it with a fresh age-out timer.
I have come to the conclusion that this is the way to go. My concern is that I have not found any other implementation like this anywhere, which leads me to believe that I have missed an inherent flaw in my design. Any input would be much appreciated.
EDIT: For clarification, it is the "dynamic" content of this report that takes too long to load. I need to cache multiple instances of the page, which is not what action/fragment caching is designed for.
I am currently creating an iOS app, which connects to a database and asynchronously downloads a JSON object of data to display in a table view.
As it currently stands, this is an OK way to do it. However, when the database starts getting much larger, it will become a massive inconvenience. I'm reasonably proficient in Objective-C, but not so much on the database side of things. What would be the best way to get this data from the server and keep it in the app? At the moment I have a custom class storing the data for each of the 'objects' in the JSON object. There will, however, be many other aspects of the app that the database will handle, such as invites, logins and user details.
Would Core Data be the way to go, i.e. duplicating the database (to a certain extent) and storing it locally, then accessing it from there? As I said, I'm not really sure which route to take here, so any advice would be really appreciated.
Core Location is for handling location (satellite and Wi-Fi positioning).
I guess you mean Core Data. Core Data is an object graph model that allows you to manipulate data as objects. You don't dig directly into the database; you ask for objects to be instantiated through predicates (a kind of WHERE clause in SQL) and then manipulate those objects.
That said, it all depends on what a "big" database is. If it's really big, you could consider copying part of it locally and requesting whatever remains from the server through your web service.
Another question you could ask yourself is how much of the data never changes, and whether your website database and your app database need to be synchronized (if your website database is always changing, it would be dumb to copy all of it into your app and keep it constantly synced).
Links:
Introduction to Core Data
Difference between Core Data and a Database (Cocoa With Love)
Edit:
A question you can ask yourself is where your data needs to be saved.
If your app only displays 20 cells out of a total of 200, I would still go for a full download of all 200 cells. After the first download, the other cells will load with no delay, which is especially appreciated if you're using table views with reusable cells.
Is a delay of a few seconds between the first 20 cells and the next 20 acceptable? I think there is no single "good" answer to your question; it depends on many factors (the purpose of your app, the acceptable time between loads, whether the info needs to be modified and saved back to the server or locally, what kind of customers you have, what your app will do with the cells, and, if you have a local database, whether it will be totally independent from the "mother" database, and if not, what kind of synchronization you need, etc.).
Trying to sum things up according to what I've understood of your needs, I would say that web services are good if you just need to retrieve info and use it without saving it back (even though there are actually services that let you do that too), while a local database is good if you need your app to be independent from your server in some ways.
Only you hold the key to answering all this and making a decision according to your needs and your knowledge of your application and your customers.
Something like JSON or SOAP is the way to go for getting structured data from a web service into objects in your iPhone app.
Storing relational data on the iPhone itself is easy with SQLite. Here's a decent-looking tutorial.
Make things easy for yourself by writing a data layer that abstracts away calls to the database, to avoid dotting SQL queries all over your code in places they shouldn't be, like the UI.
I have a project which provides users with a list of current tasks that need to be completed. Any user can complete any task, so to ensure that only one user is working on a task at a time, I need to be able to 'lock' it. I'm using SignalR for this: a user requests a lock on a task, and if they are successful (i.e. if no one else has locked it) they get access to the further information they need.
My problem is how to store the list of locked tasks. The original plan was simply to add a bit field 'IsLocked' to the Task table and update it whenever a user locked or unlocked a task. We have about 300 concurrent users, however, and a task takes only about 3-4 minutes, which would mean a huge number of additional, tiny queries against the database. We are therefore considering in-memory storage: simply keeping a list of task ids in a 'lockedTasks' list.
I had considered using caching, but am unsure of the best way to do this, or whether better alternatives exist. If anyone has any experience with this, some advice would be greatly appreciated.
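For reference, the in-memory variant described above can be made thread-safe with a ConcurrentDictionary; a minimal sketch (the hub and method names are made up):

    using System.Collections.Concurrent;
    using Microsoft.AspNet.SignalR;

    public class TaskHub : Hub
    {
        // Maps a locked taskId to the connection id of the user holding it
        private static readonly ConcurrentDictionary<int, string> LockedTasks =
            new ConcurrentDictionary<int, string>();

        public bool TryLockTask(int taskId)
        {
            // TryAdd is atomic, so two users can never both acquire the lock
            return LockedTasks.TryAdd(taskId, Context.ConnectionId);
        }

        public void ReleaseTask(int taskId)
        {
            string owner;
            LockedTasks.TryRemove(taskId, out owner);
        }
    }

Note the caveat in the answer below: anything held in static memory disappears whenever the application pool recycles.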
I would avoid in-memory storage completely, as IIS is not that great with it: if the application pool gets recycled for any reason, your list is simply gone!
Maybe a memcache system? That would not lose things in the way described above, but...
My advice would be something in the middle: file I/O is faster than requesting data from a database, especially if the database is not on the same machine (which, for security reasons, it should never be). So, just to hold your list, why not use one of the currently popular NoSQL databases?
MongoDB is a document database that has a .NET library and is easy to use. It is not as fast as memory, but it is much quicker than a conventional database for what you want.
If you pick a file-based store, it can even live in the App_Data folder, so it will be extremely fast to access, and you can hold just the task_id and user_id of every locked task there.
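As an illustration of the MongoDB suggestion (using the current .NET driver; the database, collection and type names below are made up), a unique index on TaskId keeps lock acquisition atomic:

    using MongoDB.Bson;
    using MongoDB.Driver;

    public class TaskLock
    {
        public ObjectId Id { get; set; }
        public int TaskId { get; set; }
        public int UserId { get; set; }
    }

    // ...

    var client = new MongoClient("mongodb://localhost:27017");
    var locks = client.GetDatabase("app").GetCollection<TaskLock>("lockedTasks");

    // The unique index makes a second insert for the same task fail,
    // so only one user can ever hold the lock
    await locks.Indexes.CreateOneAsync(new CreateIndexModel<TaskLock>(
        Builders<TaskLock>.IndexKeys.Ascending(l => l.TaskId),
        new CreateIndexOptions { Unique = true }));

    try
    {
        await locks.InsertOneAsync(new TaskLock { TaskId = 42, UserId = 7 });
        // lock acquired
    }
    catch (MongoWriteException)
    {
        // task is already locked by someone else
    }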
Have you considered stateful filters?
Check out these links for more info:
ASP.NET MVC Filters and Statefulness
Brad Wilson: Advanced MVC 3 (Video)
Brad Wilson: Advanced MVC 3 (PDF)
I'm sorry, but if your app can't handle a single query every 3-4 minutes x 300 users, then you're doing something very wrong. Just browsing a site typically generates orders of magnitude more queries than that.
I'm working on an application that works like a search engine; it constantly has background workers searching the web and adding results to the Results table.
While everything works perfectly, I've lately started getting huge response times when trying to browse, edit or delete the results. My guess is that the Results table is constantly locked by the workers that keep adding new data, which means web requests must wait until the table is freed.
However, I can't figure out a way to lower the load on the Results table and get faster response times for my web requests. Has anyone had to deal with something like this?
The search bots are constantly reading and adding new stuff; they insert new results as they find them. I was wondering whether adding the results to the database in bulk only after the search completes would help, or whether it would make things worse since the insert would take longer.
Anyway, I'm at a loss here and would appreciate any help or ideas.
I'm using RoR 2.3.8 and hosting my app on Heroku with PostgreSQL.
PostgreSQL doesn't lock whole tables for reads or writes; thanks to MVCC, readers and writers don't block each other. Start logging your queries and try to find out what is actually going on. Guessing doesn't help; you have to dig into it.
To check the current activity:
SELECT * FROM pg_stat_activity;
Try the NOWAIT option (e.g. SELECT ... FOR UPDATE NOWAIT), which errors out immediately instead of waiting for a lock. That said, since you're only adding new rows with your background workers, I'd assume there would be no lock conflicts when browsing/editing/deleting.
You might want to put a cache in front of the database. On Heroku you can use memcached as a cache store very easily.
This'll take some load off your db reads. You could even have your search bots update the cache when they add new stuff so that you can use a very long expiration time and your frontend Rails app will very rarely (if ever) hit the database directly for simple reads.