I have a web app running ASP.NET MVC 1.0 with LINQ to SQL.
I'm noticing a very strange problem: LINQ to SQL starts throwing exceptions (mainly "Specified cast is not valid" or "Sequence contains more than one element") once the app is under a certain amount of load.
The bigger problem is that I'm not talking about real heavy, professional stress testing. All I do is open Firefox and Chrome and hold down F5 for ten seconds in each (I call this poor man's stress testing) - lo and behold, the web app throws these exceptions randomly for the next two to five minutes. If I restart the app from IIS7 (or restart WebDev when running under Visual Studio), everything immediately goes back to normal, as if nothing happened.
At first I suspected the way I handle the DataContext - maybe I'm supposed to dispose of it in Application_End in Global.asax - but that didn't change anything.
Right now I have a single public static DataContext object used by all requests. I'm not disposing it or re-creating it. Is that the right way to do it? Am I supposed to dispose it? When exactly should I dispose it?
Several things happen on every request - for example, on every page the User object for the current user is loaded from the database and its "LastSeen" attribute is updated to DateTime.Now. Other things (the tag cloud, for example) are cached.
Any ideas why this is happening?
The DataContext class is not thread-safe - you need to create a new one for each operation. See this article by Rick Strahl (Linq to SQL DataContext Lifetime Management).
You should dispose of the DataContext after each batch of related queries, and use it like this:
using (MyDataContext dc = new MyDataContext())
{
    var x = dc.Table.Single(a => a.Id == 3);
    // do some more related stuff, but make sure your connection won't be open too long
}
Don't have a single static DataContext shared by every request. Would you use the same Connection object in plain ADO.NET for every request?!
See also http://blog.codeville.net/2007/11/29/linq-to-sql-the-multi-tier-story/
DataContext isn't thread-safe, as far as I know.
You lose isolation and cannot control when SubmitChanges() is called - concurrent requests will interfere with one another.
Memory leaks are pretty likely.
Create a DC for every operation as Rob suggests, OR use an IoC container to share a DC per request (sketched below).
DO NOT DISPOSE DCs - they are designed to be lightweight. Disposing is not only unnecessary, it can encourage bad practices and perhaps other threading issues down the road that are even harder to track down.
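If you go the shared-per-request route, one rough way to do it without a full IoC container is to lazily create the context in HttpContext.Items - a sketch only; MyDataContext is a placeholder for your generated LINQ to SQL context and the key name is arbitrary:

using System.Web;

public static class DataContextFactory
{
    private const string Key = "__requestDataContext";

    // One DataContext per HTTP request; HttpContext.Items is per-request,
    // so this never gets shared across threads serving different requests.
    public static MyDataContext Current
    {
        get
        {
            var items = HttpContext.Current.Items;
            var dc = items[Key] as MyDataContext;
            if (dc == null)
            {
                dc = new MyDataContext();
                items[Key] = dc;
            }
            return dc;
        }
    }
}

Controllers then use DataContextFactory.Current instead of a static field, and the context simply goes out of scope at the end of the request.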
I'm running a db4o server with multiple clients accessing it. I just ran into the issue of one client not seeing the changes from another client. From my research on the web, it looks like there are basically two ways to solve it.
1: Call Refresh() on the object (from http://www.gamlor.info/wordpress/2009/11/db4o-client-server-and-concurrency/):
const int activationDepth = 4;
client2.Ext().Refresh(objFromClient2, activationDepth);
2: Instead of caching the IObjectContainer, open a new IObjectContainer for every DB request.
Is that right?
Yes, #1 is more efficient, but is it really realistic to specify which objects to refresh? I mean, when a DB is involved, every time a client accesses it, it should get the latest information. That's why I'm leaning towards #2. Plus, I don't have major efficiency concerns.
So, am I right that those are the two approaches? Or is there another?
And, wait a sec... what happens when the object goes out of scope? On a timer, I call a method that fetches an object from the DB server; that method instantiates the object. Once it goes out of scope, there's nothing left to refresh, and when I query the DB again I don't see the other client's changes. In that case it seems like the only option is to open a new IObjectContainer. No?
Edit:
I thought I'd post some code using the solution I finally decided to use. Since there were some serious complexities with using a new IObjectContainer for every call, I'm simply going to do a Refresh() in every method that accesses the DB (see Refresh() line below). Since I've encapsulated my DB access into logic classes, I can make sure to do the Refresh() there, every time. I just tested this and it seems to be working.
Note: the Database variable below is the db4o IObjectContainer.
public static ApplicationServer GetByName(string serverName)
{
    ApplicationServer appServer = (from ApplicationServer server in Database
                                   where server.Name.ToUpperInvariant() == serverName.ToUpperInvariant()
                                   select server).FirstOrDefault();

    // Refresh so we pick up changes committed by other clients.
    if (appServer != null)
        Database.Ext().Refresh(appServer, 10);

    return appServer;
}
1) As you said, the major problem with this is that you usually don't really know which objects to refresh.
You can use the committed event to refresh objects as soon as any client has committed; db4o will distribute that event to the other clients. Note that this also costs some network traffic and time to send the events, and there will still be a window during which your objects hold stale state.
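For illustration only, something along these lines - I'm assuming the IEventRegistry API from Db4objects.Db4o.Events here, and the exact delegate signatures vary between db4o versions, so treat it as a sketch rather than copy-paste code:

using Db4objects.Db4o;
using Db4objects.Db4o.Events;

public static class CommitRefresher
{
    // "client" is the IObjectContainer this client already holds open.
    public static void RegisterRefreshOnCommit(IObjectContainer client)
    {
        IEventRegistry events = EventRegistryFactory.ForObjectContainer(client);
        events.Committed += delegate(object sender, CommitEventArgs args)
        {
            // args.Updated describes objects another client just committed;
            // refresh our in-memory copies so they aren't stale.
            foreach (IObjectInfo info in args.Updated)
            {
                object obj = info.GetObject();
                if (obj != null)
                    client.Ext().Refresh(obj, 2);   // activation depth 2 is an arbitrary choice
            }
        };
    }
}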
2) This is actually the cleanest method, but don't do it for every DB request. Use an object container per logical unit of work - any operation that forms one 'atomic' unit of work in your business operations.
Anyway, in general: db4o was never built with the client/server scenario as its first priority, and it shows in concurrent scenarios. You cannot avoid working with stale (and even inconsistent) object state, and there are no concurrency control options (except the low-level semaphores).
My recommendation: use a client container per unit of work. Be aware that even then you might get stale data, which can lead to an inconsistent view and update. If contention and races are rare in your application scenario and you can tolerate a mistake once in a while, then this is fine. However, if you really need to ensure correctness, I recommend using a database with a better concurrency story =(
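A container-per-unit-of-work looks roughly like this (a sketch; the host/port/credentials and the query are placeholders, and Db4oClientServer.OpenClient is the networked-client call in newer db4o versions - older ones use Db4oFactory.OpenClient):

using System.Linq;
using Db4objects.Db4o;
using Db4objects.Db4o.CS;
using Db4objects.Db4o.Linq;

public static class UnitOfWorkExample
{
    public static void DoUnitOfWork()
    {
        // One short-lived client container for the whole business operation.
        using (IObjectContainer db = Db4oClientServer.OpenClient("localhost", 8732, "user", "password"))
        {
            var appServer = (from ApplicationServer s in db
                             where s.Name == "WEB01"
                             select s).FirstOrDefault();

            // ... perform the entire 'atomic' unit of work against this container ...
        }   // closing the container here means the next unit of work starts from fresh state
    }
}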
The context: a web application written in ASP.NET MVC + NHibernate. This is a card game where players play at the same time, so their actions might modify the same field of an entity simultaneously (they all do x = x + 1 on a field). It seems like a classic problem, but I don't know how to handle it.
Needless to say, I can't present a popup to the user saying "The entity has been modified by another player. Merge or cancel?". Since this happens as part of playing a card, I can't interrupt the user like that; my application has to handle the scenario internally. Because the field is in an entity class and each session has its own instance of the entity, I can't simply take a CLR lock. Does that mean I should use pessimistic concurrency, so that each web request acting on this entity is queued until a player has finished his action? In practical terms, does that mean each PlayCard request should take a lock?
Please, don't send me to NH doc about concurrency or alike. I'm after the technique that should be used in this case, not how to implement it in NH.
Thanks
Depending on your business logic, it may make sense to try the second-level cache. Whether it's a good fit depends on the length of the game and how it is played. Since the second-level cache exists at the session factory level, the session factory will have to be managed according to the lifetime of the game. An NH session can be created per request, but being spawned by a session factory configured for second-level caching means the data of interest is cached across all sessions. The advantage of the second-level cache is that you can configure it on a class-by-class basis, caching only the entities you require. It also provides a variety of concurrency strategies depending on the cache provider. Even though this may shift the concurrency issue from the DB level to the NH session, it may give you a better option for dealing with your situation. There are gotchas to using it, but its suitability all depends on your business logic.
You can try to apply optimistic locking in this way:
The DB entity will have a column tracking the entity version (nhibernate.info link).
If you get a "stale version" exception while saving the entity (= it was modified by another user), reload the entity and try again, then send the updated value to the client.
As I understand it, your back end receives a request from the client, opens a session, makes some changes, updates entities and closes the session. In this case no thread holds an entity in memory for very long, so optimistic locking conflicts shouldn't happen too often.
This way you can avoid having many blocked threads waiting for an operation to complete.
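A rough sketch of that retry loop (GameField, sessionFactory and the mapped version column are my placeholders, not anything from the question):

using System;
using NHibernate;

// Placeholder entity - assumed to be mapped with a <version name="Version"/> element.
public class GameField
{
    public virtual Guid Id { get; set; }
    public virtual int Value { get; set; }
    public virtual int Version { get; set; }
}

public static class GameFieldUpdater
{
    public static void IncrementWithRetry(ISessionFactory sessionFactory, Guid fieldId)
    {
        const int maxAttempts = 5;
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                // A session that has thrown a stale-state exception must be discarded,
                // so open a fresh session and transaction on every attempt.
                using (ISession session = sessionFactory.OpenSession())
                using (ITransaction tx = session.BeginTransaction())
                {
                    var field = session.Get<GameField>(fieldId);
                    field.Value = field.Value + 1;   // the "x = x + 1" from the question
                    tx.Commit();                     // throws StaleObjectStateException if another player won the race
                    return;
                }
            }
            catch (StaleObjectStateException)
            {
                if (attempt == maxAttempts)
                    throw;                           // give up eventually instead of looping forever
            }
        }
    }
}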
On the other hand, if you expect retries to happen too often, you can try SELECT ... FOR UPDATE locking when loading your entity (using LockMode.Upgrade in NH's Get method), although I found a thread that discourages using this with SQL Server: SO link.
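For comparison, the pessimistic variant would look something like this (same placeholders as in the previous sketch):

using (ISession session = sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    // Issues a SELECT ... FOR UPDATE, so competing requests block until this commits.
    var field = session.Get<GameField>(fieldId, LockMode.Upgrade);
    field.Value = field.Value + 1;
    tx.Commit();
}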
In general the solution depends on the logic of the game and whether you can resolve concurrency conflicts in your code without showing messages to users. I'd also make the UI refresh itself with the latest data often enough that players don't act on an obsolete game state and then get surprised by the outcome.
I'm testing an ASP.NET MVC web application that uses Lucene index files. For every test I need to rebuild the Lucene index and then force my web application to re-read these index files. The only way I've found is to recycle the application pool, but it's rather slow.
Does anyone know a way to re-read the index files from disk without recycling the application pool?
It seems like you might not be calling Close on the IndexWriter/Reader/Searchers that you use while performing your tests. If you don't do that and you are using the FSDirectory class (which represents an index stored on the file system), then lock files are created which prevent the indexes in those directories from being opened.
So make sure to call the Close method on any object that exposes one when your test is complete, and use a try/finally block to guarantee the objects get closed.
Personally I've created an extension method which takes an object and returns an IDisposable implementation which will call Close when Dispose is called, allowing it to be used in using statements (I use reflection on the type to get the Close method and then I generate a lambda expression which is called in the Dispose method).
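Something along these lines would do it - this is my own sketch of that idea, using a plain reflection Invoke rather than a compiled lambda:

using System;

public static class CloseableExtensions
{
    private sealed class CloseOnDispose : IDisposable
    {
        private readonly object _target;
        public CloseOnDispose(object target) { _target = target; }

        public void Dispose()
        {
            // Find a public parameterless Close() method on the wrapped object and call it.
            var close = _target.GetType().GetMethod("Close", Type.EmptyTypes);
            if (close != null)
                close.Invoke(_target, null);
        }
    }

    // Usage: using (searcher.AsCloseable()) { /* run queries */ }  - Close is called on exit.
    public static IDisposable AsCloseable(this object target)
    {
        return new CloseOnDispose(target);
    }
}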
Also, if you are running a test harness and you are opening and closing the indexes in your test fixtures, you have to make sure that either:
The tests that access the indexes are run synchronously so they don't try to open locked directories
OR
You have one test class for all search-related tests and handle the opening and closing of the indexes in whatever setup and teardown mechanisms your test harness provides. You should also populate your index in the setup (rather than making it one of the test cases; otherwise you will have synchronization problems).
I'm designing an application that relies heavily on a database. I need the application to be resilient to short losses of connectivity to the database (the network going down for a few seconds, for example). What are the usual patterns people use for this kind of problem? Is there something I can do in the database access layer to gracefully handle a small glitch in the network connection to the DB? (I'm using Hibernate + Oracle JDBC + a DBCP connection pool.)
I'll assume you have hidden every database access behind a DAO or something similar.
Now create wrappers around these DAOs that try to call them and, in case of an exception, wait a second and retry. Of course this will cause the application to 'hang' during a DB outage, but it will come back to life when the database becomes available.
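The wrapper boils down to something like this retry helper (the question is Java/Hibernate, but the shape is the same in any language - shown here in C# to match the rest of this page; the delay and attempt count are arbitrary):

using System;
using System.Threading;

public static class Retry
{
    // Keep calling the operation until it succeeds or we give up.
    public static T Until<T>(Func<T> operation, int delayMilliseconds, int maxAttempts)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (Exception)
            {
                if (attempt >= maxAttempts)
                    throw;                        // the outage lasted too long - propagate the failure
                Thread.Sleep(delayMilliseconds);  // the caller "hangs" here during the outage
            }
        }
    }
}

// A DAO wrapper then just delegates every call through the helper, e.g.:
// public Customer FindCustomer(long id) { return Retry.Until(() => inner.FindCustomer(id), 1000, 30); }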
If this is not acceptable, you'll have to move the cut-off point up closer to the UI layer. Consider the following approach:
The user causes a request.
Wrap all the request information in a message and put it in the queue.
Return to the user, telling him that his request will get processed in a short time.
A worker registered on the queue will process the request, retrying when database problems exist.
Note that you are now deep in concurrency land, so you must handle things like requests referencing an entity that has already been deleted.
Read up on 'eventual consistency'
Since you are using Hibernate, you'll have to deal with lazy loading. An interruption in connectivity will kill your session, so it might be best for you not to use lazy loading at all and to work with detached objects instead.
I am using a TWebModule with Apache. If I understand correctly, Apache will spawn another instance of my TWebModule object if all previously created instances are busy processing requests. Is this correct?
I have created my own SessionObject class and a TStringList to store the instances. The TStringList is created in the initialization section at the bottom of the source file that holds the TWebModule object. I'm finding that initialization can be called multiple times (presumably when Apache has to spawn another process).
Is there a way I could have a global "Sessions" TStringList to hold all of my session objects? Or is the "safe", proper method to store session information in a database and retrieve it based on a cookie on each request?
The reason I want this is to cut down on database access and instead hold session information in memory.
Thanks.
As Stijn suggested, using separate storage to hold the session data really is the best way to go. Even better is to design your application so that the state is inherently carried by the web browser. This will greatly increase the ability to scale your application into the thousands or tens of thousands of concurrent users with much less hardware.
Intraweb is a great option, but it suffers from the scaling issue in the sense that more concurrent users, even idle users, require more hardware to support them. It is far better to design from the outset for your server to run as internally stateless as possible. Of course, if you have a fixed number of users and don't expect any growth, this is less of an issue.
That's odd. If initialization sections get called more than once, it might be because the DLL is loaded into separate process spaces. One option I can think of is to check whether the "Sessions" object already exists before creating it in initialization. If the DLL really is loaded into separate processes, this won't help; in that case I suggest writing a central session-storage process and using inter-process communication from within your TWebModule (there are a few options: messages, named pipes, COM...).
Intraweb in application mode really handles session management and database access very smoothly, and scales well. I've commented on it previously. While this doesn't directly answer the question you asked, when I faced the same issues Intraweb solved them for me.