Synchronizing my records between two separate databases - asp.net-mvc

I am building a BPM based on ASP.NET MVC, and I am working with two systems:
A third-party BPM.
My own BPM system.
Currently, when adding a new process, I do the following:
Create the new process in the third-party application using its REST API.
Create the new process in my own BPM database.
But I am facing the following problem:
How can I add/edit/delete records in the two systems in a consistent manner, so that if a record was not added in the third-party system I remove it from my system, and vice versa?
My process model class is:
public class newprocess
{
    public string name { get; set; }
    public string activityId { get; set; }
    public string Status { get; set; }
}
My action method is:
[HttpPost]
public ActionResult CreateProcess(string name)
{
    using (var client = new WebClient())
    {
        try
        {
            repository.CreateProcess(name, "Pending");
            repository.save();
            var query = HttpUtility.ParseQueryString(string.Empty);
            query["j_username"] = "kermit";
            query["hash"] = "9449B5ABCFA9AFDA36B801351ED3DF66";
            query["loginAs"] = User.Identity.Name;
            var url = new UriBuilder("http://localhost:8080/jw/web/json/Process/create/" + name);
            url.Query = query.ToString();
            string json = client.DownloadString(url.ToString());
            var serializer = new JavaScriptSerializer();
            var myObject = serializer.Deserialize<newprocess>(json);
            string activityId = myObject.activityId;
            if (activityId != null)
            {
                repository.UpdateProcess(name, "Finish");
                repository.save();
            }
            // ...
        }
        catch (WebException)
        {
            // the third-party call failed; the record stays "Pending"
        }
    }
    // ...
}
So what I am doing inside my POST action method is:
Create a new record in my database with a status of "Pending".
Call the third-party API and get the result.
If the activityId is not null (the create succeeded in the third-party system), update my record's status to "Finish"; otherwise the status stays "Pending".
I have also built a screen which displays all the records with the status "Pending", so the admin is able to delete them from my own database.
So will my approach work well, or will it create problems I am unaware of? Or should I be looking for a completely different approach?
Thanks in advance for any help.

The direction looks OK, but remember to complete the cycle and consider a few more options.
Based on your statement of "what I am doing":
1. Create a new record in my database with a status of "Pending".
2. Call the third-party API and get the result.
3. If the activityId is not null (the create succeeded in the third-party system), update my record's status to "Finish"; otherwise the status stays "Pending".
4. A screen displays all the records with the status "Pending", and the admin is able to delete them from my own database.
You have covered the main concept of a two-stage commit, and if all goes well this will be fine.
But you should also consider the following.
Investigate "reliable messaging", if only from a theory point of view. It may be overkill here.
What if you don't receive a reply? You can't assume the process wasn't created: the return traffic may get lost after the commit succeeded on the other side.
So you should follow up with check-exists calls, or tidy up manually. You actually need to consider re-posting your side's entry rather than deleting it every time there is no response, although deleting is of course the most likely outcome. (I'm not talking about the case where your side receives an explicit "not created" response; that is a clear, known state.)
What happens if your pending-to-finished commit fails? How do you recover from that situation: delete the other side's entry, or retry on your side?
You should also consider what the basic pattern/plan is when the other side is not reachable at all: accept the posts, record them all as pending, and have a process that retries the pending records later (see the sketch below), or just fail all new calls until the other party is reachable again.
At least think about the non-perfect-world scenarios and have a plan.
That is the basic pattern, and doing some of it manually is OK; it is a plan, and it is a valid pattern.
Of course you can add tools and logic to support this, e.g. error handling, automated retry patterns, asynchronous acknowledgements, etc. But that is taking it to enterprise level, at an enterprise cost.
Basically, if you take the stance that ONE system is responsible for the overall integrity and ongoing synchronization, that is the best place to start. You have that: your system is the orchestrator and is responsible for synchronization outcomes.
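A minimal sketch of that retry/reconciliation process, run periodically (e.g. from a scheduled task). The repository methods and the CheckProcessExists/TryCreateRemoteProcess helpers are assumptions modeled on the question's code, not a definitive implementation:
public void ReconcilePendingProcesses()
{
    foreach (var process in repository.GetProcessesByStatus("Pending"))  // assumed helper
    {
        // The reply may have been lost after a successful commit on the other
        // side, so check for existence before re-posting or deleting.
        if (CheckProcessExists(process.name))  // hypothetical check-exists call
        {
            repository.UpdateProcess(process.name, "Finish");
        }
        else
        {
            // Either retry the create or delete the local record; here we
            // retry and leave deletion to the admin screen.
            TryCreateRemoteProcess(process.name);  // hypothetical re-post
        }
        repository.save();
    }
}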

Related

Discard an already saved entity

I have a distributed system where users can make changes to one single database. To illustrate the problem, let's assume we have the following entities:
public class Product
{
    public int Id { get; set; }
    public List<ProductOwner> ProductOwners { get; set; }
}
public class ProductOwner
{
    public int ProductId { get; set; }
    [ForeignKey("ProductId")]
    [InverseProperty("ProductOwners")]
    public Product Product { get; set; }
    public int OwnerId { get; set; }
    [ForeignKey("OwnerId")]
    public Owner Owner { get; set; }
}
public class Owner
{
    public int Id { get; set; }
}
Let's also assume we have two users, UserOne and UserTwo connected to the system.
UserOne adds Product1 and assigns Owner1 as an owner. As a result, a new ProductOwner1 is created with key=[Product1.Id, Owner1.Id].
UserTwo does the same operation, and another instance ProductOwner2 with key=[Product1.Id, Owner1.Id] is created. This results in an EF exception on the server side, which is expected, as a row with key=[Product1.Id, Owner1.Id] already exists in the database.
Question
The issue above can be partly resolved by having some sort of real-time data refresh on both UserOne's and UserTwo's machines (I am already doing this) and running a validation task on the server to ignore, and not save, entities that are already in the DB.
The remaining issue is how to tell Breeze on UserTwo's machine to mark ProductOwner2 as saved and change its state from Added to Unchanged.
I think this is an excellent question, and it has been raised often enough that I wanted to chime in on how I would approach the above scenario, in hopes that others can find a good way to accomplish this from a Breeze.js perspective as well. This answer doesn't really address server logic, so it is incomplete at best.
Step 1 - Open a web socket
First and foremost, we need some way to tell the other connected clients that there has been a change. SignalR is a great way to do this if you are using the ASP.NET MVC stack, and there are a bunch of other tools as well.
The point is that we don't need an elaborate way of passing data down and forcing it into the client's cache; we just need a lightweight way to tell the client that some information has changed, so it can refresh if it cares about that information. My recommendation here would be to use a payload that tells the client either which entity type and id changed, or names a collection of entities the client should refresh. Two examples of a JSON payload that would work well here:
{
    "entityChanges": [
        {
            "id": "123",
            "type": "product",
            "new": false
        },
        {
            "id": "234",
            "type": "product",
            "new": true
        }
    ],
    "collectionChanges": [
        {
            "type": "productOwners"
        }
    ]
}
In this scenario we are simply telling the client that the products with ids of 123 and 234 have changed, and that 234 happens to be a new entity. We aren't pushing any data about which properties have changed; it is the client's responsibility to decide whether to refresh or re-query. There is also the possibility of telling the client to refresh a whole collection, as in the second array, but I will focus on the first example.
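On the server side, the broadcast itself can stay tiny. Here is a hedged SignalR 2.x sketch; the hub, the client method name "notifyChanges", and where you call it from are all assumptions, not prescribed by Breeze:
using Microsoft.AspNet.SignalR;

public class ChangesHub : Hub
{
    // No server methods needed; the hub exists only so we can push to clients.
}

public static class ChangeNotifier
{
    // Call this from your save pipeline after a successful commit.
    public static void Broadcast(string entityType, string id, bool isNew)
    {
        var hub = GlobalHost.ConnectionManager.GetHubContext<ChangesHub>();
        // Send identifiers only; each client decides whether to re-query.
        hub.Clients.All.notifyChanges(new
        {
            entityChanges = new[] { new { id = id, type = entityType, @new = isNew } }
        });
    }
}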
Step 2 - Handle the changes
Ok we got a payload from our web socket that we need to pass to some analyzer to decide whether to requery. My recommendation here is to check if that entity exists in cache, and if so, refresh it. If a flag comes down in the JSON that says it is a new entity we probably also need to requery it. Here is some basic logic -
function checkForChanges(payload) {
    var parsedJson = JSON.parse(payload);
    $.each(parsedJson.entityChanges, function (index, item) {
        // If it is a new entity,
        if (item["new"] === true) {
            // go get it from the database.
            manager.fetchEntityByKey(item.type, item.id)
                .then(fetchSucceeded).fail(fetchFailed);
        } else {
            // Check the local cache first,
            var localEntity = manager.getEntityByKey(item.type, item.id);
            // and if we have a local copy already,
            if (localEntity) {
                // go refresh it from the database.
                manager.fetchEntityByKey(item.type, item.id)
                    .then(fetchSucceeded).fail(fetchFailed);
            }
        }
    });
}
Now there is probably some additional logic in your application that needs to be handled, but in a nutshell we are:
Opening up a lightweight connection to the client to listen for changes only
Creating a handler for when those changes occur
Applying some logic on how to query for or refresh the data
One consideration here is that you may want to use different merge strategies depending on various conditions. For instance, if the entity already has local changes you may want to preserve them, whereas if it is an entity that is always in a state of flux you may want to overwrite changes.
http://www.breezejs.com/sites/all/apidocs/classes/MergeStrategy.html
Hope this provides some insight, and if it doesn't answer your question directly I apologize for crowding up the answers : )
Would it be possible to catch the Entity Framework / unique key constraint error on the Breeze client, and react by creating a new entity manager (using the createEmptyCopy method), loading the relevant ProductOwner records, and using them to determine which ProductOwner records in the original entity manager need to be set to "unchanged" using the entity's entityAspect.setUnchanged method? Once this "synchronization" is done, the save can be retried.
In other words, the client is optimistic that the save will succeed but can recover if necessary. The server remains oblivious to the potential race condition and has no custom code.
A brute force approach, apologies if I'm stating the obvious.

Command > Rich Model > Event pattern in MVC

I am creating an ASP.NET MVC app attempting to avoid the Fat Controller smell. I am doing this by making controller methods simply send lightweight commands to a command bus, which then get picked up by command handlers. The command handlers enact the commands on the domain model, which in turn creates state-change events that are persisted.
I am doing this to try and get away from the CRUD model of "get X from repository, change it and put it back", remove all domain-specific knowledge from the web application and to allow the intent of the user to be communicated directly to the domain model.
So, let's say a Contact aggregate is composed as follows (I have omitted all but one of the setter methods for brevity).
public class Contact
{
    private Address _homeAddress;
    public Address HomeAddress
    {
        get { return _homeAddress; }
        set
        {
            if (value.Equals(_homeAddress)) return;
            _homeAddress = value;
            AddEvent(new HomeAddressChanged(Id, _homeAddress));
        }
    }
    public Address WorkAddress { get; set; }
    public PhoneNumber PhoneNumber { get; set; }
    public EmailAddress EmailAddress { get; set; }
}
The command handler that enacts a change of HomeAddress would look like so.
public class ChangeHomeAddressCommandHandler : IHandleCommand<ChangeHomeAddressCommand>
{
    private IRepository<Contact> _repo;

    public ChangeHomeAddressCommandHandler(IRepository<Contact> repo)
    {
        _repo = repo;
    }

    public void Execute(ChangeHomeAddressCommand command)
    {
        var toEdit = _repo.One(command.Id);
        toEdit.HomeAddress = command.NewHomeAddress;
        _repo.CommitChanges(toEdit);
    }
}
My trouble is that the form the user submits needs to allow editing of a WHOLE CONTACT (i.e. all of its associated addresses, phone numbers, etc.), which means there needs to be a command and a handler for each and every property state change.
Each of these handlers needs to load the aggregate, make its change and then commit it. So even if you don't change all the properties, the command handlers still have to load and build the Contact aggregate four times, which is unnecessarily expensive.
I have considered some options...
A "macro" command (called maybe EditContactCommand) into which instances of each possible sub-command (i.e. the individual ChangeHomeAddressCommand) can be added. The macro command loads the aggregate once, passes it through the sub-commands and commits the changes on dispose (see the sketch after these options).
Making the UI more "task-focused". Instead of the Edit page being a structured collection of textboxes to gather input, use labels accompanied by a "Change" button which invokes a modal dialog. When the modal dialog is OK'd, make an AJAX post back to the controller, which in turn buses a command. Or indeed, build smaller pages which only expose certain facets of the Contact aggregate. You only ever change what has actually changed, and changes can happen without a big "Save"-style commit. (I'm not sure whether the users would wear this, because they seem to like their sea of textboxes!)
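Here is a hedged sketch of the first option, assuming the IHandleCommand and IRepository shapes shown above; IContactSubCommand, its Apply method and the int Id are made-up names for illustration:
public class EditContactCommand
{
    public int Id { get; set; }
    public IList<IContactSubCommand> SubCommands { get; private set; }

    public EditContactCommand()
    {
        SubCommands = new List<IContactSubCommand>();
    }
}

public interface IContactSubCommand
{
    void Apply(Contact contact);   // e.g. wraps a ChangeHomeAddressCommand
}

public class EditContactCommandHandler : IHandleCommand<EditContactCommand>
{
    private IRepository<Contact> _repo;

    public EditContactCommandHandler(IRepository<Contact> repo)
    {
        _repo = repo;
    }

    public void Execute(EditContactCommand command)
    {
        // One load, many changes, one commit.
        var contact = _repo.One(command.Id);
        foreach (var sub in command.SubCommands)
            sub.Apply(contact);   // each property setter still raises its own event
        _repo.CommitChanges(contact);
    }
}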
I'd be grateful for any advice, experience and wisdom. Thanks.
The problem might be that you're trying hard to un-CRUDify an application that is (as far as we can tell from that little code) very CRUDish in nature.
No matter how you try to bend your commands to make them look less like CRUD, they won't make any sense if they don't describe a domain reality -- it only adds more unnecessary complexity. Changing an email address might be a command of its own right if it triggers a whole process of re-sending a validation email and so on, but not if it just modifies the email field.
I think there's nothing wrong with commands that modify an entire entity, as long as they are valid domain operations/events explored with your domain expert and they aren't 100% of your commands. Applications are rarely purely CRUD, but when they are, DDD is certainly not the best approach to choose.
You might already be painting yourself into a corner. I'm missing the user's intent. Why is the home address being changed? Did the user make a typo, or did the contact really move? If it's the latter, you might need to send an email; if it's the former, probably not.
Let scenarios drive you to discovering the user's intent.

How to delete stale data asp.net mvc code first?

I have developed an ASP.NET MVC application using Entity Framework Code First. In my app I have a class which maps to a table with the following properties:
public class Comments
{
    public int Id { get; set; }
    public string Comment { get; set; }
    public DateTime LastEdit { get; set; }
}
I want my app to be able to delete (remove) comments which are older than 40 days, automatically.
How can I achieve that?
This has nothing to do with the class you posted or with ASP.NET, or MVC, or even Entity Framework really. This is just about scheduling a task to run each day, which will identify data and delete it.
Essentially you have two primary options...
1) Create a Windows Service. This would include a Timer which is set to execute every 24 hours (or any other interval you see fit). The action invoked by that Timer would connect to the database, identify the records to be deleted, and delete them.
2) Create a Console Application. This wouldn't internally run any kind of schedule, but would just perform a one-time action of connecting to the database, identifying the records to be deleted, and deleting them. This application would be scheduled to run periodically (again, every day sounds reasonable) using the host system's task scheduler.
It would make sense to use the same Entity Framework code that the web application uses, so you would want to make sure that code is in its own Class Library project and then both the Web Application and the Windows Service would reference that project.
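A minimal hedged sketch of the Console Application option; "AppDbContext" is an assumed name for the shared Code First context, and Comments is the class from the question:
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        // Run daily by the host's task scheduler (e.g. Windows Task Scheduler).
        using (var db = new AppDbContext())
        {
            var cutoff = DateTime.Now.AddDays(-40);
            // Materialize the stale rows, then delete them in one batch.
            var stale = db.Comments.Where(c => c.LastEdit < cutoff).ToList();
            foreach (var comment in stale)
            {
                db.Comments.Remove(comment);
            }
            db.SaveChanges();
            Console.WriteLine("Deleted {0} stale comments.", stale.Count);
        }
    }
}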
Alternatively, if you need to keep this local to the web application itself, then the application would need to do this in response to a request of some kind. That would hurt the performance of the application, but it's still possible. Any time a user requests a given page you could first perform those deletes and then return the requested view. Again, this is not ideal, because it means doing this many times a day and interrupting the user experience (even if just for a moment). It's best to offload background data maintenance to an offline application, such as a Windows Service or a Console Application.
1. You can create a trigger in the database for this.
2. As David says, using a Windows Service or Console Application, create an application that connects to the database and does something like this:
public void DeleteOldComments()
{
    // the question asks for comments older than 40 days
    var cutoff = DateTime.Now.AddDays(-40);
    var oldComments = Db.CommentsTable.Where(e => e.LastEdit <= cutoff).ToList();
    foreach (var item in oldComments)
    {
        Db.CommentsTable.Remove(item);
    }
    Db.SaveChanges();
}
3. A long and bad way: create a settings table and save a LastCommentDeleteDate there. In your site layout, using JavaScript, call an action that fires DeleteOldComments() once a day (using a cookie). On every request check LastCommentDeleteDate, and if enough time has passed, call the delete function.

ASP.NET MVC - State and Architecture

After a pair-programming session, an interesting question came up which I think I know the answer to.
Question: Is there any other desired way in ASP.NET MVC to retain 'state', other than writing to a database or a text file?
I'm going to define state here to mean that we have a collection of person objects, we create a new one, go to another page, and expect to see the newly created person (so no Ajax).
My thoughts are that we don't want any kung-fu ViewState or other mechanisms; this framework is about going back to a stateless web.
What about user session? There are plenty of valid use cases for storing things in session. And what about a distributed caching system like memcached? You also seem to have left out the query string, which is an excellent state saver (?page=2). To me those seem like other desirable methods to save state across requests...?
My thoughts are that we don't want any kung-fu ViewState or other mechanisms; this framework is about going back to a stateless web.
The example you provided is pretty easy to do without any sort of "view state kung fu" using capabilities that are already in MVC. "User adds a person and sees that on the next screen." Let me code up a simple PersonController that does exactly what you want:
public ActionResult Add()
{
    return View(new Person());
}

[HttpPost]
public ActionResult Add(PersonViewModel myNewPersonViewModel)
{
    // validate, user entered everything correctly
    if (!ModelState.IsValid)
        return View();

    // map view model to my database/entity/domain object
    var myNewPerson = new Person()
    {
        FirstName = myNewPersonViewModel.FirstName,
        LastName = myNewPersonViewModel.LastName
    };

    // 1. maintains person state, sends the user to the next view in the chain
    //    using the same action
    if (MyDataLayer.Save(myNewPerson))
    {
        var persons = MyDataLayer.GetPersons();
        persons.Add(myNewPerson);
        return View("PersonGrid", persons);
    }

    // 2. pass along the unique id of the person to a different action or controller
    //    (yes, another database call, but probably not a big deal)
    if (MyDataLayer.Save(myNewPerson))
        return RedirectToAction("PersonGrid" /* ...etc., pass the id as a route value */);

    return View("PersonSaveError", myNewPersonViewModel);
}
Now, what I'm sensing is that you want the person on yet another page after PersonSaveSuccess or somewhere else. In that case, you probably want to use TempData[""], which is a single-serving session that only saves state from one request to the next, or manage the traditional Session[""] yourself somehow.
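For illustration, a quick hedged sketch of the TempData route (the action names and the Id property are made up); TempData survives exactly one subsequent request, so the new person's id can ride along through a redirect without lingering in session:
[HttpPost]
public ActionResult Add(PersonViewModel myNewPersonViewModel)
{
    // ... save as in the example above ...
    TempData["newPersonId"] = myNewPerson.Id;   // assumes Person exposes an Id
    return RedirectToAction("PersonSaveSuccess");
}

public ActionResult PersonSaveSuccess()
{
    var id = (int?)TempData["newPersonId"];   // gone again after this request
    return View(id);
}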
What is confusing to me is that you're probably going to the db to get all your persons anyway. If you save a person, it should be in your persons collection on the next call to GetPersons(). If you're not using Ajax, then what state are you trying to persist?
ASP.NET MVC offers a cleaner way of working with session storage using model binding. You can write a custom model binder that can supply instances from session to your action methods. Look it up.
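To sketch that idea under classic ASP.NET MVC's IModelBinder (the generic binder and the ShoppingCart registration are made-up examples, not a prescribed API usage):
using System.Web.Mvc;

public class SessionModelBinder<T> : IModelBinder where T : class, new()
{
    public object BindModel(ControllerContext controllerContext,
                            ModelBindingContext bindingContext)
    {
        var session = controllerContext.HttpContext.Session;
        var key = typeof(T).FullName;
        var model = session[key] as T;
        if (model == null)
        {
            model = new T();
            session[key] = model;   // first use: create and stash in session
        }
        return model;
    }
}

// Registered in Application_Start, any action taking a ShoppingCart parameter
// then receives the session-backed instance automatically:
// ModelBinders.Binders.Add(typeof(ShoppingCart), new SessionModelBinder<ShoppingCart>());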

Best practices when limiting changes to specific fields with LINQ2SQL

I was reading Steven Sanderson's book Pro ASP.NET MVC Framework and he suggests using a repository pattern:
public interface IProductsRepository
{
    IQueryable<Product> Products { get; }
    void SaveProduct(Product product);
}
He accesses the products repository directly from his Controllers, but since I will have both a web page and a web service, I wanted to add a "Service Layer" that would be called by the Controllers and the web services:
public class ProductService
{
    private IProductsRepository productsRepository;

    public ProductService(IProductsRepository productsRepository)
    {
        this.productsRepository = productsRepository;
    }

    public Product GetProductById(int id)
    {
        return (from p in productsRepository.Products
                where p.ProductID == id
                select p).First();
    }

    // more methods
}
This all seems fine, but my problem is that I can't use his SaveProduct(Product product), because:
1) I want to allow only certain fields of the Product table to be changed.
2) I want to keep an audit log of each change made to each field of the Product table, so I would have to have methods for each field that I allow to be updated.
My initial plan was to have a method in ProductService like this:
public void ChangeProductName(Product product, string newProductName);
Which then calls IProductsRepository.SaveProduct(Product)
But there are a few problems I see with this:
1) Isn't it rather un-"OO" to pass in the Product object like this? However, I can't see how this code could go in the Product class, since it should just be a dumb data object. I could see adding validation to a partial class, but not this.
2) How do I ensure that no fields other than the product name were changed before I persist the change?
I'm basically torn, because I can't put the auditing/update code in Product, and the ProductService class's update methods just seem unnatural (however, GetProductById seems perfectly natural to me).
I think I'd still have these problems even if I didn't have the auditing requirement. Either way, I want to limit which fields can be changed in one class, rather than duplicating the logic in both the web site and the web services.
Is my design pattern just bad in the first place, or can I somehow make this work in a clean way?
Any insight would be greatly appreciated.
I split the repository into two interfaces, one for reading and one for writing.
The reading one implements IDisposable and reuses the same DataContext for its lifetime. It returns the entity objects produced by LINQ to SQL. For example, it might look like:
interface Reader : IDisposable
{
    IQueryable<Product> Products { get; }
    IQueryable<Order> Orders { get; }
    IQueryable<Customer> Customers { get; }
}
The IQueryable is important so I get the delayed-evaluation goodness of LINQ to SQL. This is easy to implement with a DataContext, and easy enough to fake. Note that when I use this interface I never use the autogenerated properties for related rows (i.e. no fair using order.Products directly; calls must join on the appropriate ID columns). This is a limitation I don't mind living with, considering how much easier it makes faking the read repository for unit tests.
The writing one uses a separate DataContext per write operation, so it does not implement IDisposable. It does NOT take entity objects as input or output; it takes the specific fields needed for each write operation.
When I write test code, I can substitute the readable interface with a fake implementation that uses a bunch of List<>s which I populate manually, and I use mocks for the write interface. This has worked like a charm so far.
Don't get in the habit of passing the entity objects around; they're bound to the DataContext's lifetime, and that leads to unfortunate coupling between your repository and its clients.
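For illustration, a sketch of what the matching write-side interface might look like under that rule; the method names and fields are made up, the point being that no entity objects cross the boundary:
interface Writer
{
    int AddProduct(string name, decimal price);            // returns the new ProductID
    void RenameProduct(int productId, string newName);
    void AddOrderLine(int orderId, int productId, int quantity);
}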
To address your need for auditing/logging of changes: just today I put the finishing touches on a system I'll suggest for your consideration. The idea is to serialize the "before" and "after" state of your object (easily done if you are using LINQ to SQL entity objects, through the magic of the DataContractSerializer), then save these to a logging table.
My logging table has columns for the date, the username, a foreign key to the affected entity, and a title/quick summary of the action, such as "Product was updated". There is also a single column for storing the change itself, a general-purpose field holding a mini-XML representation of the "before and after" state. For example, here's what I'm logging:
<ProductUpdated>
    <Deleted><Product ... /></Deleted>
    <Inserted><Product ... /></Inserted>
</ProductUpdated>
Here is the general purpose "serializer" I used:
public string SerializeObject(object obj)
{
    // See http://msdn.microsoft.com/en-us/library/bb546184.aspx :
    Type t = obj.GetType();
    DataContractSerializer dcs = new DataContractSerializer(t);
    StringBuilder sb = new StringBuilder();
    XmlWriterSettings settings = new XmlWriterSettings();
    settings.OmitXmlDeclaration = true;
    XmlWriter writer = XmlWriter.Create(sb, settings);
    dcs.WriteObject(writer, obj);
    writer.Close();
    string xml = sb.ToString();
    return xml;
}
Then, when updating (this can also be used for logging inserts/deletes), grab the state before you do your model binding, then again afterwards, shove both into an XML wrapper, and log it! (Or I suppose you could use two columns in your logging table for these, although the XML approach allows me to attach any other information that might be helpful.)
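To make that concrete, here is a hedged sketch of an update-with-audit method; MyDataContext, ChangeLogs, and the ChangeLog entity are hypothetical names, and SerializeObject is the helper shown above:
public void RenameProductWithAudit(int productId, string newName, string userName)
{
    using (var db = new MyDataContext())
    {
        var product = db.Products.Single(p => p.ProductID == productId);
        string before = SerializeObject(product);   // the "Deleted" state
        product.ProductName = newName;              // the one permitted change
        string after = SerializeObject(product);    // the "Inserted" state
        db.ChangeLogs.InsertOnSubmit(new ChangeLog
        {
            Date = DateTime.Now,
            UserName = userName,
            ProductID = productId,
            Title = "Product was updated",
            Change = "<ProductUpdated><Deleted>" + before + "</Deleted>" +
                     "<Inserted>" + after + "</Inserted></ProductUpdated>"
        });
        db.SubmitChanges();   // entity change and log row commit together
    }
}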
Furthermore, if you want to allow only certain fields to be updated, you'll be able to do this with either a whitelist/blacklist in your controller's action method, or you could create a "ViewModel" to hand to your controller, with the restrictions you desire placed upon it. You could also look into the many partial methods and hooks that your LTS entity classes should have on them, which would allow you to detect changes to fields that you don't want changed.
Good luck! -Mike
Update:
For kicks, here is how I deserialize an entity (as I mentioned in my comment) for viewing its state at some later point in history, after I've extracted it from the log entry's wrapper:
public Product DeserializeProduct(string xmlString)
{
    MemoryStream s = new MemoryStream(Encoding.Unicode.GetBytes(xmlString));
    DataContractSerializer dcs = new DataContractSerializer(typeof(Product));
    Product product = (Product)dcs.ReadObject(s);
    return product;
}
I would also recommend reading Chapter 13, "LINQ in every layer" in the book "LINQ in Action". It pretty much addresses exactly what I've been struggling with -- how to work LINQ into a 3-tier design. I'm leaning towards not using LINQ at all now after reading that chapter.
