ASP.NET MVC: Repository pattern high concurrency updates

I'm writing an app where we may switch out the repository later (it's currently Entity Framework) to use either Amazon or Windows Azure storage.
I have a service method that disables a user by ID; all it does is set a property to true and set the DisabledDate. Should I call the repository to get that user, set the properties in the service, and then call the repository's save method? If I do this, that's two database calls - should I worry about that? And what if the user is updating their profile at the same time the admin calls the disable method, and the user's code then calls the repository's save method with an entity that still holds false for the IsDisabled property? Wouldn't that set the user back to enabled if it runs right after the disable call?
What is the best way to solve this problem? How do I update data in a highly concurrent system?

CustomerRepository:
// Would be called from more specific method in Service Layer - e.g DisableUser
public void Update(Customer c)
{
    var stub = new Customer { Id = c.Id };   // create "stub"
    ctx.Customers.Attach(stub);              // attach "stub" to graph
    ctx.ApplyCurrentValues("Customers", c);  // override scalar values of "stub"
    ctx.SaveChanges();                       // save changes - 1 call to DB. leave this out if you're using UoW
}
That should serve as a general-purpose "UPDATE" method in your repository. It should only be used when the entity already exists.
That is just an example - in reality you should/could be using generics, checking whether the entity is already attached to the graph before attaching, etc.
But that will get you on the right track.

As long as you know the id of the entity you want to save, you should be able to do it by attaching the entity to the context first, like so:
var c = new Customer();
c.Id = someId;
context.AttachTo("Customer", c);        // attach the stub in the Unchanged state
c.PropertyToChange = "propertyValue";   // only this property becomes Modified
context.SaveChanges();
Whether this approach is recommended or not I'm not so sure, as I'm not overly familiar with EF, but it will let you issue the update command without having to load the entity first.
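Applied to the DisableUser scenario from the question, that attach-then-modify idea might look roughly like this. This is only a sketch, not code from either answer: IsDisabled and DisabledDate are the property names described in the question, and ctx is assumed to be the repository's ObjectContext.
public void Disable(int customerId)
{
    var stub = new Customer { Id = customerId };   // no SELECT needed
    ctx.Customers.Attach(stub);                    // attached in the Unchanged state
    stub.IsDisabled = true;                        // only these two properties
    stub.DisabledDate = DateTime.UtcNow;           // become Modified
    ctx.SaveChanges();                             // single UPDATE touching only those columns
}
Because only the changed columns end up in the UPDATE, a concurrent profile save that doesn't write IsDisabled won't silently re-enable the user; if the profile save rewrites the whole entity, you would still need a concurrency check (for example a timestamp column) to catch the overlap.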

Related

DBContext (entity framework) and pre-loaded entities

I use code first in a web application where I have a form to upload text files and import the data into my database.
Each file may have 20,000+ records to import.
To speed things up I preload some entities so I don't have to ask the DbContext for them every time. Then, when I create an object for insert, I do for example:
myNewObject.Category = preloadedCategories.First(p => p.Code == code);
I have read some articles on the web because EF is extremely slow on batch inserts, so what I do is:
first use Configuration.AutoDetectChangesEnabled = false;
then every 1000 records I dispose the context and create a new one.
BUT! Since the preloaded entities were loaded from a DbContext that was disposed, after creating the new DbContext I have a problem with preloadedCategories.First(p => p.Code == code). When I call SaveChanges(), EF tries to also save the preloadedCategories.First(p => p.Code == code) object and fails.
So how can I achieve this? I don't want to ask the DbContext every time to load some (non-changing) objects. Is it possible?
thanks
When dealing with a large number of records in EF, a few things will help:
As @janhartmann states, use .AsNoTracking().
As you stated, use Configuration.AutoDetectChangesEnabled = false, which will require the next point.
Use context.Categories.Entry(category).State = EntityState.Modified to attach a disconnected entity to a context and mark it as modified.
Also check that preloadedCategories is no longer an IQueryable and that the data really is local, not being lazy loaded from the database.
If there are no changes to your Category object and you just want to link your myNewObject to an existing category, you have two options (sketched below):
Set the foreign key on myNewObject instead of the navigation property.
Use context.Products.Entry(myNewObject).State = EntityState.Added instead of context.Products.Add(myNewObject) to avoid adding the entire graph of navigation properties.
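A minimal sketch of those two options inside the import loop, assuming the product entity exposes a CategoryId foreign key property and ctx is the current (freshly recreated) DbContext; the names are illustrative rather than taken from the question:
var category = preloadedCategories.First(p => p.Code == code);   // plain detached object

// option 1: set the foreign key instead of the navigation property
myNewObject.CategoryId = category.Id;

// option 2: set the state directly so the detached category is not re-inserted
myNewObject.Category = category;
ctx.Entry(myNewObject).State = EntityState.Added;   // instead of ctx.Products.Add(myNewObject)

ctx.SaveChanges();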
Good luck

Breeze BeforeSaveEntity only allows update to Added entities

Don't know if this is intended or a bug, but the following code using BeforeSaveEntity will only modify the entity for newly created records (EntityState = Added) and won't work for modified ones. Is this correct?
protected override bool BeforeSaveEntity(EntityInfo entityInfo)
{
    var entity = entityInfo.Entity;
    if (entity is User)
    {
        var user = entity as User;
        user.ModifiedDate = DateTime.Now;
        user.ModifiedBy = 1;
    }
    ...
The root of this issue is that on the breeze server we don't have any built-in change tracking mechanism for changes made on the server. Server entities can be pure POCOs. The breeze client has a rich change tracking capability for any client side changes, but when you get to the server you need to manage this yourself.
The problem occurs because of an optimization we perform on the server so that we only update those properties that are changed. i.e. so that any SQL update statements are only made to the changed columns. Obviously this isn’t a problem for Adds or Deletes or those cases where we update a column that was already updated on the client. But if you update a field on the server that was not updated on the client then breeze doesn't know anything about it.
In theory we could snapshot each entity coming into the server and then iterate over every field on the entity to determine if any changes were made during save interception but we really hate the perf implications especially since this case will rarely occur.
So the suggestion made in another answer here to update the server side OriginalValuesMap is correct and will do exactly what you need.
In addition, as of version 1.1.3, there is an additional EntityInfo.ForceUpdate flag that you can set that will tell breeze to update every column in the specified entity. This isn't quite as performant as the suggestion above, but it is simpler, and the effects will be the same in either case.
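A minimal sketch of the ForceUpdate approach, assuming Breeze 1.1.3+ and the User entity from the question (the EntityState check is optional and simply mirrors the other answer below):
protected override bool BeforeSaveEntity(EntityInfo entityInfo)
{
    var user = entityInfo.Entity as User;
    if (user != null && entityInfo.EntityState == EntityState.Modified)
    {
        user.ModifiedDate = DateTime.Now;
        user.ModifiedBy = 1;
        entityInfo.ForceUpdate = true;   // tell breeze to update every column of this entity
    }
    return true;
}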
Hope this helps.
I had the same problem, and I solved it by doing this:
protected override bool BeforeSaveEntity(EntityInfo entityInfo)
{
    if (entityInfo.EntityState == EntityState.Modified)
    {
        var entity = (MyEntity)entityInfo.Entity;   // cast to your concrete type
        // register the property in the original values map so breeze includes it in the UPDATE
        entityInfo.OriginalValuesMap.Add("ModificationDate", entity.ModificationDate);
        entity.ModificationDate = DateTime.Now;
    }
    return true;
}
I think you can apply this easily to your case.

InSingletonScope using Ninject and a Windows Service

I re-posted this question as I think it is a bit vague. New Post
I am currently using a Windows Service that is on a 2 minute timer. I am using EF code first with a repository pattern for data access. I am using Ninject to inject my dependencies. I have the following bindings in my NinjectDependencyResolver class:
ConnectionStringSettings connectionStringSettings = ConfigurationManager.ConnectionStrings["Database"];
Bind<IDatabaseFactory>().To<DatabaseFactory>()
    .InSingletonScope()
    .WithConstructorArgument("connectionString", connectionStringSettings.Name);
Bind<IUnitOfWork>().To<UnitOfWork>().InSingletonScope();
Bind<IMyRepository>().To<MyRepository>().InSingletonScope();
When my service runs every 2 minutes I do something similar to this:
foreach (var row in rows)
{
    var existing = myRepository.GetById(row.Id);
    if (existing == null)
    {
        existing = new Row();
        myRepository.Add(existing);
        unitOfWork.Commit();
    }
}
I am starting to see an error in my logs that says:
The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges.
Is it correct to use InSingletonScope when using Ninject in a Windows Service? I believe I tried using different scopes like InTransientScope, but I could only get InSingletonScope to work with data access. Does the error message have anything to do with scope, or is it unrelated?
Assuming that the service is not the only process that operates on the database, you shouldn't use singleton scope. What happens in this case is that you are reusing a DbContext that has cached entities which are out of date.
The better way is to treat each timer execution of the service like a web/WCF request and create a new job processor for each run.
var processor = factory.CreateRowsProcessor();
processor.ProcessRows(rows);

public class RowsProcessor
{
    private readonly IUnitOfWork unitOfWork;
    private readonly IMyRepository myRepository;

    public RowsProcessor(IUnitOfWork unitOfWork, IMyRepository myRepository)
    {
        this.unitOfWork = unitOfWork;
        this.myRepository = myRepository;
    }

    public void ProcessRows(Row[] rows)
    {
        foreach (var row in rows)
        {
            var existing = myRepository.GetById(row.Id);
            if (existing == null)
            {
                existing = new Row();
                myRepository.Add(existing);
                unitOfWork.Commit();
            }
        }
    }
}
Depending on the problem, it might be even better to put the loop outside and have a new processor for each single row.
Read http://www.planetgeek.ch/2011/12/31/ninject-extensions-factory-introduction/ for more information about factories. Also have a look at the InCallScope of the named scope extension if you need to inject the UoW into multiple classes. http://www.planetgeek.ch/2010/12/08/how-to-use-the-additional-ninject-scopes-of-namedscope/
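With those two extensions in place, the bindings might look something like this (a sketch: IRowsProcessorFactory is a factory interface you would declare yourself, and ToFactory()/InCallScope() come from Ninject.Extensions.Factory and Ninject.Extensions.NamedScope respectively; the other bindings reuse the names from the question):
// Ninject generates the factory implementation for you
Bind<IRowsProcessorFactory>().ToFactory();

// one context/UoW per object graph created through the factory,
// instead of one for the whole service lifetime
Bind<IDatabaseFactory>().To<DatabaseFactory>()
    .InCallScope()
    .WithConstructorArgument("connectionString", connectionStringSettings.Name);
Bind<IUnitOfWork>().To<UnitOfWork>().InCallScope();
Bind<IMyRepository>().To<MyRepository>().InCallScope();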
InSingletonScope will create a singleton context = one context for the whole lifetime of your service. That is a very bad solution. Because the context holds all objects from all previous timer events, its memory consumption grows, and you open yourself up to errors like the one you are receiving at the moment (the error may be unrelated to your singleton context, but most likely it is not). The exception says that you have two different objects with the same key identifier tracked by the context - that is not allowed.
Instead of using a singleton UoW, repository and context, use a singleton factory and in each timer event request fresh instances from the factory. Dispose the context at the end of the timer event processing.
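A rough sketch of that suggestion, using a hypothetical IUnitOfWorkFactory and GetRepository accessor; the point is only that each timer tick gets a fresh context that is disposed when the tick's work is done:
private void OnTimerElapsed(object sender, ElapsedEventArgs e)
{
    using (var unitOfWork = this.unitOfWorkFactory.Create())    // fresh context per tick
    {
        var repository = unitOfWork.GetRepository<Row>();       // hypothetical accessor
        foreach (var row in rows)                                // rows obtained as in the question
        {
            if (repository.GetById(row.Id) == null)
            {
                repository.Add(new Row());
                unitOfWork.Commit();
            }
        }
    }   // context disposed here, so stale tracked entities never accumulate
}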

CreateDbCommandDefinition fires twice during method PUT through WCF Data Services

We are trying to develop our own EF provider for our legacy APIs. We managed to get the "GET/POST" operations working successfully.
However, for "PUT/MERGE" operations, the method "CreateDbCommandDefinition" (of the DbProviderServices implementation) fires twice: once with a "DbQueryCommandTree" and again with a "DbUpdateCommandTree".
I understand that it needs to fetch the entity prior to updating it (for change tracking, I guess). In our case, I don't need the entity to be fetched prior to the update; I simply want to call our legacy APIs with the entity sent for update. How can we tell it strictly not to do the "DbQueryCommandTree" work (and do only the "DbUpdateCommandTree" work) for "PUT/MERGE" operations?
The client code looks something like the one below:
public void CustomerUpdateTest()
{
    try
    {
        Ctxt.MergeOption = MergeOption.NoTracking;
        var oNewCus = new Customer()
        {
            MasterCustomerId = "1001",
            SubCustomerId = "0",
            FirstName = "abc",
            LastName = "123"
        };
        Ctxt.AttachTo("Customers", oNewCus);
        Ctxt.UpdateObject(oNewCus);
        //Ctxt.SaveChanges();
        Ctxt.SaveChanges(SaveChangesOptions.ReplaceOnUpdate);
    }
    catch (Exception ex)
    {
        Assert.Fail(ex.Message);
    }
}
You will have to write your own IDataServiceUpdateProvider to make this happen. For EF, the built-in EF update provider issues two queries - one to get the entity which needs to be modified and one for the actual modification. We are planning to make this provider public in our next release, so folks can derive from it and just override one or more methods. But for now, you will have to implement the interface yourself.
For PUT/MERGE requests, WCF Data Services calls IDataServiceUpdateProvider.GetResource to get the entity to update. In your implementation of this method, you can return a token that represents the object that need to get modified (you will have to visit the expression tree that gets passed in this method to find out the entity set and the key value of the entity in question).
In SaveChanges, you can push the update based on the token. That way you can avoid one round trip to the database.
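The shape of that token-based provider could look something like the outline below. This is only a sketch: a real provider has to implement all of the IUpdatable/IDataServiceUpdateProvider members, and UpdateToken, ExtractSetAndKey and LegacyApi.Update are hypothetical names standing in for your own expression-tree parsing and legacy API calls.
// hypothetical token: just enough to identify the entity and collect the new values
public class UpdateToken
{
    public string EntitySetName;
    public object Key;
    public Dictionary<string, object> ChangedValues = new Dictionary<string, object>();
}

public class LegacyApiUpdateProvider // : IDataServiceUpdateProvider (remaining members omitted)
{
    private readonly List<UpdateToken> pendingChanges = new List<UpdateToken>();

    public object GetResource(IQueryable query, string fullTypeName)
    {
        // instead of executing the query, walk its expression tree to pull out
        // the entity set and key value, and return a token - no round trip
        var token = ExtractSetAndKey(query.Expression);   // hypothetical helper
        pendingChanges.Add(token);
        return token;
    }

    public void SetValue(object targetResource, string propertyName, object propertyValue)
    {
        ((UpdateToken)targetResource).ChangedValues[propertyName] = propertyValue;
    }

    public void SaveChanges()
    {
        // push each pending token straight to the legacy API; nothing was fetched first
        foreach (var token in pendingChanges)
            LegacyApi.Update(token.EntitySetName, token.Key, token.ChangedValues);   // hypothetical call
        pendingChanges.Clear();
    }
}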
Hope this helps.

How do I implement / build / create an 'in-memory database' for my unit tests

I started unit testing a while ago, and as it turned out I was doing more regression testing than unit testing, because I also included my database layer and thus hit the database every time.
So I implemented Unity to inject a fake database layer, but of course I still want to store some data, and the main opinion I got was: "create an in-memory database".
But what is that, and how do I implement it?
The main question is: I think I have to fake the database layer, but doesn't that mean I end up creating a 'simple database' myself? Or: how can I keep it simple and not rebuild SQL Server just for my unit tests? :)
At the end of this question I'll give an explanation of the situation I ran into on the project I just started on, and I was wondering if this was the way to go.
Michel
The current situation I've seen at this client is that test data is contained in XML files, and there is a 'fake' database layer that connects all the XML files together.
For the real database we're using the Entity Framework, and this works very simply.
And now, in the 'fake' layer, I have to create all kinds of classes to load, save, persist, etc. the data.
It sounds weird that there is so much work in the fake layer, and so little in the real layer.
I hope this all makes sense :)
EDIT:
So I know I have to create a separate database layer for my unit tests, but how do I implement it?
Define an interface for your data access layer and have (at least) two implementations of it:
The real database provider, which will in turn run queries on an SQL database, etc.
An in-memory test provider, which can be prepopulated with test data as part of each unit test.
The advantage of this is that the modules making use of the data provider do not need to know whether the database is the real one or the test one, and hence more of the real code will be tested. The test database can be simple (like simple collections of objects) or complex (custom structures with indexes). It can also be a mocked implementation that will assert that it's being called appropriately as part of the test.
Additionally, if you ever need to support another data storage method (or different SQL database), you just need to write another implementation that conforms to the interface, and can be confident that none of the calling code will need to be reworked.
This approach is easiest if you plan for it from (or near) the start, so I'm not sure how easy it will be to apply to your situation.
What it might look like
If you're just loading and saving objects by id, then you could have an interface and implementations like these (in Java-esque pseudo-code; I don't know much about ASP.NET):
interface WidgetDatabase {
    Widget loadWidget(int id);
    void saveWidget(Widget w);
    void deleteWidget(int id);
}

class SqlWidgetDatabase implements WidgetDatabase {
    Connection conn;

    // connect to database server of choice
    SqlWidgetDatabase(String connectionString) { conn = new Connection(connectionString); }

    public Widget loadWidget(int id) {
        conn.executeQuery("SELECT * FROM widgets WHERE id = " + id);
        Widget w = conn.fetchOne();
        return w;
    }

    // more methods that run simple sql queries...
}

class MemoryWidgetDatabase implements WidgetDatabase {
    Set<Widget> widgets;

    MemoryWidgetDatabase() { widgets = new HashSet<Widget>(); }

    public Widget loadWidget(int id) {
        for (Widget w : widgets)
            if (w.getId() == id)
                return w;
        return null;
    }

    // more methods that find/add/delete a widget in the "widgets" set...
}
If you need to run other queries (such as batch selects based on more complex criteria), you can add methods for them to the interface.
Likewise for complex updates. Transaction support is possible for the real database implementation; I'm not sure how easy it is to build an in-memory db that provides proper transaction support. To test it you'd need to "open" several "connections" to the same data set and only apply updates to that shared dataset when a transaction is committed.
I used SQLite for unit tests as a fake DB.
Why don't you use a mocking framework (like Moq or Rhino Mocks)? If you access your data through an interface, you can mock that interface and specify whatever you want to return in every test. Another approach is to have a separate environment for testing purposes with a "real" database, where you run your tests before taking your code to the production environment.
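A sketch of that idea in C# with Moq; IWidgetRepository, WidgetService and the Widget class are illustrative names rather than anything from the question:
// illustrative data access interface that the class under test depends on
public interface IWidgetRepository
{
    Widget GetById(int id);
}

[Test]
public void Returns_widget_from_repository()
{
    // arrange: the mock stands in for the real database layer
    var repository = new Mock<IWidgetRepository>();
    repository.Setup(r => r.GetById(5)).Returns(new Widget { Id = 5, Name = "gizmo" });

    var service = new WidgetService(repository.Object);   // class under test (illustrative)

    // act
    var widget = service.Load(5);

    // assert: the service used the repository and returned its data
    Assert.AreEqual("gizmo", widget.Name);
    repository.Verify(r => r.GetById(5), Times.Once());
}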
Uhhhh... if you're storing all your test data in XML files, you've just swapped one database for another. That is not an in-memory database. In PHP you would use something like this:
class MemoryProductDB {
    private $products;

    public function __construct() {
        $this->products = array();
    }

    public function find($index) {
        return $this->products[$index];
    }

    public function save($product) {
        $this->products[$product['index']] = $product;
    }
}
You'll notice that all my data is stored in and retrieved from an in-memory array. This is a simple in-memory database.
IMHO, if you're using XML to store test data then you really haven't disconnected the dependencies between the model and the database effectively. No matter how complex your business rules are, when they touch the database all they really do is CRUD (create, retrieve, update, and delete) functionality.
If what you're dealing with in the model is multiple objects from the database, then maybe you need to compose all those objects into a single object and have the model use that one object. An example would be an order composed of products. Don't retrieve products and then save products; retrieve orders and save orders, and have your model work on orders. The model shouldn't know anything about products.
This is called granularity of abstraction.
[Edit]
There was a very good question in the comments. When testing with an in-memory database we don't care about how the select works in a real database. The controller, first off, needs the database to report the number of records available for paging. The IMDb (in-memory database) should just send back a number; the controller should never care what that number is. Same with the actual records: hopefully all your controller is doing is displaying what it gets back from the IMDb.
[Edit]
You should never be unit testing your controllers with a live model and IMDb. The setup code for the IMDb will have a lot of friction. Instead, when unit testing a controller, you should test against a mock, stub, or fake model. The best use of an IMDb is during an integration test or when unit testing a model. Isn't an IMDb a fake?
My scenario is:
In my client I use a plug-in for a table: DataTables, with server-side processing.
The client GET requests items in the table: product.get(5,10). The returned data will be JSON-encoded.
The model is responsible for forming the JSON from information retrieved through the gateway to the database. The gateway is just a facade over the database. I'm a mocker, so my gateway is a mock, not an in-memory gateway.
public function testSkuTable() {
    $skus = array(
        array('id' => '1', 'data' => 'data1'),
        array('id' => '2', 'data' => 'data2'),
        array('id' => '3', 'data' => 'data3'));
    $names = array(
        'id',
        'data');
    $start_row = $this->parameters['start_row'];
    $num_rows = $this->parameters['num_rows'];
    $sort_col = $this->parameters['sort_col'];
    $search = $this->parameters['search'];
    $requestSequence = $this->parameters['request_sequence'];
    $direction = $this->parameters['dir'];
    $filterTotals = 1;
    $totalRecords = 1;
    $this->gateway->expects($this->once())
        ->method('names')
        ->with($this->vendor)
        ->will($this->returnValue($names));
    $this->gateway->expects($this->once())
        ->method('skus')
        ->with($this->vendor, $names, $start_row, $num_rows, $sort_col, $search, $direction)
        ->will($this->returnValue($skus));
    $this->gateway->expects($this->once())
        ->method('filterTotals')
        ->will($this->returnValue($filterTotals));
    $this->gateway->expects($this->once())
        ->method('totalRecords')
        ->with($this->vendor)
        ->will($this->returnValue($totalRecords));
    $expectJson = '{"sEcho": '.$requestSequence.', "iTotalRecords": '.$totalRecords.', "iTotalDisplayRecords": '.$filterTotals.', "aaData": [ ["1","data1"],["2","data2"],["3","data3"]] }';
    $actualJson = $this->skusModel->skuTable($this->vendor, $this->parameters);
    $this->assertEquals($expectJson, $actualJson);
}
You will notice that with this unit test I'm not concerned with what the data looks like. $skus doesn't even look anything like the actual table schema; all that matters is that records are returned. Here is the actual code for the model:
public function skuTable($vendor, $parameterList) {
    $startRow = $parameterList['start_row'];
    $numRows = $parameterList['num_rows'];
    $sortCols = $parameterList['sort_col'];
    $search = $parameterList['search'];
    if ($search == null) {
        $search = "";
    }
    $requestSequence = $parameterList['request_sequence'];
    $direction = $parameterList['dir'];
    $names = $this->propertyNames($vendor);
    $skus = $this->skusList($vendor, $names, $startRow, $numRows, $sortCols, $search, $direction);
    $filterTotals = $this->filterTotals($vendor, $names, $startRow, $numRows, $sortCols, $search, $direction);
    $totalRecords = $this->totalRecords($vendor);
    return $this->buildJson($requestSequence, $totalRecords, $filterTotals, $skus, $names);
}
The first part of the method pulls the individual parameters out of the $parameterList that comes from the GET request. The rest are calls to the gateway. Here is one of those methods:
public function skusList($vendor, $names, $start_row, $num_rows, $sort_col, $search, $direction) {
    return $this->skusGateway->skus($vendor, $names, $start_row, $num_rows, $sort_col, $search, $direction);
}
I've been using in-memory SQLite for my unit tests; it's really useful.
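For reference, opening an in-memory SQLite database from .NET looks roughly like this (a sketch assuming the System.Data.SQLite provider; the table is illustrative, and the database only lives as long as the connection stays open):
using (var connection = new SQLiteConnection("Data Source=:memory:"))
{
    connection.Open();
    using (var command = connection.CreateCommand())
    {
        command.CommandText = "CREATE TABLE Widgets (Id INTEGER PRIMARY KEY, Name TEXT)";
        command.ExecuteNonQuery();
    }
    // ... point the code under test at this open connection ...
}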
