The context: a web application written in ASP.NET MVC + NHibernate. This is a card game where players play at the same time, so their actions might modify the same field of an entity at the same time (they all do x = x + 1 on a field). It seems like a pretty classical problem, but I don't know how to handle it.
Needless to say, I can't present a popup to the user saying "The entity has been modified by another player. Merge or cancel?". When you consider that this is tied to an action on a card, I can't interfere like that; my application has to handle this scenario internally. Since the field is in an entity class and each session has its own instance of the entity, I can't simply take a CLR lock. Does that mean I should use pessimistic concurrency, so that each web request acting on this entity is queued until a player has finished his action? In practical terms, does it mean that each PlayCard request should take a lock?
Please don't send me to the NH docs about concurrency or the like. I'm after the technique that should be used in this case, not how to implement it in NH.
Thanks
It may make sense, depending on your business logic, to try second-level caching. This may be a good fit depending on the length of the game and how it is played. Since the second-level cache exists at the session-factory level, the session factory will have to be managed according to the lifetime of the game. An NH session can be created per request, but being spawned by a session factory configured for second-level caching means the data of interest is cached across all sessions. The advantage of the second-level cache is that you can configure it on a class-by-class basis, caching only the entities you require. It also provides a variety of concurrency strategies depending on the cache provider. Even though this may shift the concurrency issue from the DB level to the NH session, it may give you a better option for dealing with your situation. There are gotchas to using it, but its suitability all depends on your business logic.
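For illustration, enabling the second-level cache for one entity in an NHibernate XML mapping might look like the fragment below. The `Game` class, table, and property names are placeholders, and a cache provider still has to be configured on the session factory:

```xml
<class name="Game" table="Games">
  <!-- read-write is the strictest strategy most providers support;
       nonstrict-read-write or read-only may fit other entities better -->
  <cache usage="read-write" />
  <id name="Id" generator="native" />
  <property name="Round" />
</class>
```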
You can try to apply optimistic locking in this way:
The DB entity will have a column tracking the entity version (nhibernate.info link).
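As a sketch, that version column is declared in the XML mapping with a `<version>` element placed right after the id; the `Player` entity and column names here are hypothetical:

```xml
<class name="Player" table="Players">
  <id name="Id" generator="native" />
  <!-- NHibernate bumps this on every UPDATE and adds it to the WHERE clause;
       a non-matching version raises a StaleObjectStateException -->
  <version name="Version" column="Version" />
  <property name="Score" />
</class>
```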
If you get a "stale version" exception while saving the entity (= it was modified by another user), reload the entity and try again. Then send the updated value to the client.
As I understand it, your back-end receives a request from the client, opens a session, makes some changes, and updates entities before closing the session. In this case no thread will hold one entity in memory for too long, and optimistic locking conflicts shouldn't happen too often.
This way you can avoid having many locked threads waiting for an operation to complete.
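The reload-and-retry loop can be sketched as below. This is a self-contained simulation: `VersionedCounter` stands in for a versioned DB row, and a failed `TrySave` plays the role of NHibernate's `StaleObjectStateException`; in a real app each attempt would open a fresh session, reload the entity, and save again.

```csharp
using System;

// Stand-in for a versioned row: TrySave fails when the expected version is stale,
// mimicking an optimistic UPDATE ... WHERE Version = @expected.
class VersionedCounter
{
    private int _value;
    private int _version;
    private readonly object _gate = new object();

    public (int Value, int Version) Load()
    {
        lock (_gate) return (_value, _version);
    }

    public bool TrySave(int newValue, int expectedVersion)
    {
        lock (_gate)
        {
            if (_version != expectedVersion) return false; // another player won the race
            _value = newValue;
            _version++;
            return true;
        }
    }
}

static class OptimisticRetry
{
    // Apply x = x + 1, reloading and retrying until the save sticks.
    public static int Increment(VersionedCounter row)
    {
        while (true)
        {
            var (value, version) = row.Load();   // fresh read each attempt
            if (row.TrySave(value + 1, version))
                return value + 1;                // committed: send this to the client
            // stale version: someone else updated the row first; loop and retry
        }
    }
}
```

Because every attempt rereads the current value, concurrent x = x + 1 requests are all preserved instead of overwriting each other.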
On the other hand, if you expect retries to happen too often, you can try SELECT FOR UPDATE locking when loading your entity (using LockMode.Upgrade in the NH Get method). Although I found a thread that discourages using this with SQL Server: SO link.
In general the solution depends on the logic of the game and whether you can resolve concurrency conflicts in your code without showing messages to users. I'd also make the UI refresh itself with the latest data often enough that players don't act on an obsolete game state and then get surprised by the outcome.
Related
I am currently implementing a web application in .NET Core (C#) using Entity Framework. While working on the project I have encountered quite a few challenges, but I will start with the ones I think are most important. My questions are as follows:
Instead of frequently loading data from the database, I am keeping a set of static objects that mirror the data in the database. However, it is tedious and error prone to ensure that any changes, i.e., adding/deleting/modifying objects, are saved to the database in real time. Is there any good example or advice I can refer to in order to improve my approach?
Another thing is that the values of some objects' properties will change on the fly according to the values of some other objects' properties, something like a spreadsheet where a cell's value changes automatically when the value in the cell its formula refers to changes. I do not have a solution for this yet; I'd appreciate any example I can refer to. But this will add another layer of complexity in syncing the changes of the in-memory objects to the database.
At the moment, I am unsure if there is a better approach. I'd appreciate any help. Thanks!
Basically, you're facing a problem called eventual consistency. Something changes, and two or more systems need to reflect that change. The problem here is that both changes need to be applied in order to consider the operation successful. If either one fails, you need to know.
In your case, I would use the Azure Service Bus. You can create queues and put messages on a queue. An Azure Function would handle these queue messages. You would create two queues: one for database updates, and one for the in-memory update (I think changing this to a cache service may be something to think about). The advantage of these queues is that you can easily drop messages on them from anywhere. Because you mentioned the object is going to evolve, you may need to update these objects either in the database or in memory (cache).
Once you've done that, I'd create a topic with two subscriptions: one forwarding messages to Queue 1, and the other to Queue 2. This will solve your primary problem. When an object changes, just send it to the topic. Both changes (database and memory) will be executed automagically.
The only problem you have now is that you mentioned you wanted to update the database in real time. With this scenario, you're going to have to give that up.
Also, make sure you have proper alerts in place for the queues, so that if you miss a message, or your functions don't handle it well enough, you'll receive an alert to check and correct errors.
I totally agree with #nineedm's answer, but there are also other solutions.
If you introduce a cache, you will always face the cache invalidation problem: you have to mark the cache as invalid when the data changes. Sometimes this is easy, depending on the nature of the cached data and how often it changes.
If you have just a single application, MemoryCache can be enough, with properly specified expiration options.
If there is a cluster, you have to look at distributed cache solutions, for example Redis. There is an MS article about that: Distributed caching in ASP.NET Core.
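As a minimal sketch of the single-application case, here is a hand-rolled cache with absolute expiration; in a real ASP.NET Core app you would use `IMemoryCache` from `Microsoft.Extensions.Caching.Memory` instead, which offers the same expiration options plus eviction callbacks. Everything here is illustrative, not a production cache:

```csharp
using System;
using System.Collections.Concurrent;

// Toy cache illustrating invalidation-by-expiry plus explicit invalidation.
class ExpiringCache<TKey, TValue>
{
    private readonly ConcurrentDictionary<TKey, (TValue Value, DateTime ExpiresAt)> _entries = new();

    public void Set(TKey key, TValue value, TimeSpan ttl) =>
        _entries[key] = (value, DateTime.UtcNow + ttl);

    public bool TryGet(TKey key, out TValue value)
    {
        if (_entries.TryGetValue(key, out var e) && e.ExpiresAt > DateTime.UtcNow)
        {
            value = e.Value;
            return true;
        }
        _entries.TryRemove(key, out _);   // lazily evict stale entries
        value = default;
        return false;                     // caller reloads from the database
    }

    // Explicit invalidation for when you know the underlying data changed.
    public void Invalidate(TKey key) => _entries.TryRemove(key, out _);
}
```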
Is it possible to commit a transaction in the background while already returning the view to the user, when using NHibernate in an ASP.NET MVC application?
So upon reaching "ActionExecuted", which is normally the point at which the transaction is committed when using the session-per-request pattern, I want to continue right away while NHibernate starts committing. This would allow the user to see the resulting view earlier.
Instead of committing in the background (which you can do with a Thread, as long as you make sure the session gets cleaned up afterwards), why not switch to a queue-based architecture?
Advantages:
It's actually designed for what you want, not a hack
You can scale out as far as you want (same app, different app, different server, different datacenter...)
If you build it correctly, it can offer even more reliability than a straight-to-db approach
Of course, there is a cost: creating the DTOs for the queue and then building the actual transaction. Also, the request is not really finished when you return control to the user (this is non-deterministic: the next request might or might not find the data in the db).
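To make the trade-off concrete, here is an in-process sketch of the idea: the request thread drops a DTO on a queue and returns immediately, while a background worker performs the (simulated) commit. A real deployment would put a broker between the two, and the "commit" would be the NHibernate transaction; every name here is a stand-in.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

class CommitQueue
{
    private readonly BlockingCollection<string> _queue = new();
    public readonly List<string> Committed = new();   // stands in for the database
    private readonly Task _worker;

    public CommitQueue()
    {
        // Background worker: drains the queue and "commits" each DTO.
        _worker = Task.Run(() =>
        {
            foreach (var dto in _queue.GetConsumingEnumerable())
                lock (Committed) Committed.Add(dto);
        });
    }

    // Called from the controller action: cheap, returns before the commit happens.
    public void Enqueue(string dto) => _queue.Add(dto);

    public void Shutdown()
    {
        _queue.CompleteAdding();
        _worker.Wait();   // drain remaining messages
    }
}
```

The view can be returned right after `Enqueue`; note the non-determinism mentioned above — until the worker catches up, a follow-up request may not see the data.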
I have an old SL4/RIA app which I am looking to replace with Breeze. I have a question about memory use and caching. My app loads lists of Jobs (a typical user would have access to about 1,000 of these jobs). Additionally, there are quite a few lookup entity types. I want to make sure these are cached well client-side, but updated per session. When a user opens a job, it loads many more related entities (anywhere from 200 - 800 additional entities) which compose multiple matrix-style views for the jobs. A user can view the list of jobs, or navigate to view 1 job at a time.
I feel that I should be concerned with memory management, especially not knowing how browsers might deal with this. Originally I felt this should all be one EntityManager and I would detachEntities when the user navigates away from a job, but I'm thinking this might benefit from multiple managers split by intended lifetime. Or perhaps I should create a new dataservice & EntityManager each time the user navigates to a new hash '/#/' area, since comments on clear() seem to indicate that this would be faster? If I did this, I suppose I would be using pub/sub to notify other viewmodels of changes to entities? This seems complex and defeats some of the benefits of Breeze as the context.
Any tips or thoughts about this would be greatly appreciated.
I think I understand the question. I think I would use a multi-manager approach:
Lookups Manager - holds once-per-session reference (lookup) entities
JobsView Manager - "readonly" list of Jobs in support of the JobsView
JobEditor Manager - One per edit session.
The Lookups Manager maintains the canonical copy of reference entities. You can fill it once with a single call to server (see docs for how). This Lookups Manager will Breeze-export these reference entities to other managers which Breeze-import them as they are created. I am assuming that, while numerous and diverse, the total memory footprint of reference entities is pretty low ... low enough that you can afford to have more than one copy in multiple managers. There are more complicated solutions if that is NOT so. But let that be for now.
The JobsView Manager has the necessary reference entities for its display. If you only displayed a projection of the Jobs, it would not have Jobs in cache. You might have an array and key map instead. Let's keep it simple and assume that it has all the Jobs but not their related entities.
You never save changes with this manager! When editing or creating a Job, your app always fires up a "Job Editor" view with its own VM and JobEditor Manager. Again, you import the reference entities you need and, when editing an existing Job, you import the Job too.
I would take this approach anyway ... not just because of memory concerns. I like isolating my edit sessions into sandboxes. Eases cancellation. Gives me a clean way to store pending changes in browser storage so that the user won't lose his/her work if the app/browser goes down. Opens the door to editing several Jobs at the same time ... without worrying about mutually dependent entities-with-changes. It's a proven pattern that we've used forever in SL apps and should apply as well in JS apps.
When a Job edit succeeds, you have to tell the local client world about it. Lots of ways to do that. If the ONLY place that needs to know is the JobsView, you can hardcode a backchannel into the app. If you want to be more clever, you can have a central singleton service that raises events specifically about Job saving. The JobsView and each new JobEditor communicate with this service. And if you want to be hip, you use an in-process "Event Aggregator" (your pub/sub) for this purpose. I'd probably be using Durandal for this app anyway and it has an event aggregator in the box.
Honestly, it's not that complicated to use and importing/exporting entities among managers is a ... ahem ... breeze. Well worth it compared to refreshing the Jobs List every time you return to it (although you'll want a "refresh button" too because OTHER users could be adding/changing those Jobs). You retain plenty of Breeze benefits: querying, validation, change-tracking, batch saves, entity navigation (those reference lists work "for free" in Breeze).
As a refinement, I don't know that I would automatically destroy the JobEditor view/viewmodel/manager when I returned to the JobsView. In my experience, people often return to the same Job that they just left. I might hold on to a view so you could go back and forth quickly. But now I'm getting tricky.
I'm running a db4o server with multiple clients accessing it. I just ran into the issue of one client not seeing the changes from another client. From my research on the web, it looks like there are basically two ways to solve it.
1: Call Refresh() on the object (from http://www.gamlor.info/wordpress/2009/11/db4o-client-server-and-concurrency/):
const int activationDepth = 4;
client2.Ext().Refresh(objFromClient2, activationDepth);
2: Instead of caching the IObjectContainer, open a new IObjectContainer for every DB request.
Is that right?
Yes, #1 is more efficient, but is it really realistic to specify which objects to refresh? I mean, when a DB is involved, every time a client accesses it, it should get the latest information. That's why I'm leaning towards #2. Plus, I don't have major efficiency concerns.
So, am I right that those are the two approaches? Or is there another?
And, wait a sec... what happens when your object goes out of scope? On a timer, I call a method that gets an object from the DB server. That method instantiates the object. Since the object went out of scope, it's not there to refresh. And when I call the DB, I don't see the changes from the other client. In this case, it seems like the only option is to open a new IObjectContainer. No?
** Edit **
I thought I'd post some code using the solution I finally decided on. Since there were some serious complexities in using a new IObjectContainer for every call, I'm simply going to do a Refresh() in every method that accesses the DB (see the Refresh() line below). Since I've encapsulated my DB access into logic classes, I can make sure to do the Refresh() there every time. I just tested this and it seems to be working.
Note: The Database variable below is the db4o IObjectContainer.
public static ApplicationServer GetByName(string serverName)
{
    ApplicationServer appServer = (from ApplicationServer server in Database
                                   where server.Name.ToUpperInvariant() == serverName.ToUpperInvariant()
                                   select server).FirstOrDefault();

    // Refresh so we pick up changes committed by other clients (activation depth 10).
    if (appServer != null)
        Database.Ext().Refresh(appServer, 10);

    return appServer;
}
1) As you said, the major problem with this is that you usually don't really know which objects to refresh.
You can use the committed event to refresh objects as soon as any client has committed; db4o will distribute that event. Note that this also consumes some network traffic and time to send the events, and there will be a time frame in which your objects have a stale state.
2) This is actually the cleanest method, but not per DB request. Use an object container for every logical unit of work: any operation which is one 'atomic' unit of work in your business operations.
Anyway, in general: db4o was never built with the client-server scenario as its first priority, and it shows in concurrent scenarios. You cannot avoid working with stale (and even inconsistent) object state, and there are no concurrency control options (except the low-level semaphores).
My recommendation: use a client container per unit of work. Be aware that even then you might get stale data, which then might lead to an inconsistent view & update. If there are rarely any contentions & races in your application scenario and you can tolerate a mistake once in a while, then this is fine. However, if you really need to ensure correctness, then I recommend using a database which has a better concurrency story =(
I'm designing an application that relies heavily on a database. I need my application to be resilient to short losses of connectivity to the database (the network going down for a few seconds, for example). What are the usual patterns people use for this kind of problem? Is there something I can do in the database access layer to gracefully handle a small glitch in the network connection to the db? (I'm using hibernate + oracle jdbc + dbcp pool.)
I'll assume you have hidden every database access behind a DAO or something similar.
Now create wrappers around these DAOs that try to call them and, in case of an exception, wait a second and retry. Of course this will cause 'hanging' of the application during a db outage, but it will come back to life when the database becomes available.
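A minimal sketch of such a wrapper, assuming a delegate-shaped DAO call; the exception filter, delay, and attempt cap here are placeholders, and in real code you would catch only connectivity-related exceptions rather than everything:

```csharp
using System;
using System.Threading;

static class Retry
{
    // Wrap a DAO call: on failure, wait and retry until the database comes back
    // (or the attempt cap is reached, at which point the last exception escapes).
    public static T Execute<T>(Func<T> daoCall, int maxAttempts = 30, int delayMs = 1000)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return daoCall();
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                Thread.Sleep(delayMs);   // the application "hangs" here during the outage
            }
        }
    }
}
```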
If this is not acceptable, you'll have to move the cut closer to the UI layer. Consider the following approach:
1. User causes a request.
2. Wrap all the request information in a message and put it on the queue.
3. Return to the user, telling him that his request will be processed shortly.
4. A worker registered on the queue processes the request, retrying when database problems exist.
Note that you are now deep in concurrency land, so you must handle things like requests referencing an entity which has already been deleted.
Read up on 'eventual consistency'.
Since you are using hibernate, you'll have to deal with lazy loading. An interruption in connectivity will kill your session, so it might be best for you not to use lazy loading at all, but to work with detached objects instead.