Can TransactionTemplate be used with multiple data sources in Grails?

In our Grails application, we use one database per tenant. Each database contains the data of all domain objects. We switch databases based on the request context, using a pattern like DomainClass.getDS().find().
Transactions do not work out of the box, since the request does not know which transaction manager to use. Both @Transactional and withTransaction do nothing.
I have implemented my own version of withTransaction():
public static def withTransaction(Closure callable) {
    // Wrap the closure in a TransactionTemplate driven by the transaction
    // manager of the current tenant, resolved from the request context.
    new TransactionTemplate(getTM()).execute(callable as TransactionCallback)
}
The getTM() method returns a transaction manager based on the request context, for example transactionManager_db0.
This seems to work in my sandbox. Will this also work:
For parallel requests?
If the database has several replicas?
For hierarchical transactions?
I believe in TDD, but with the exception of the last bullet, I have a hard time writing tests to verify this code.

Related

How can I access lookup data via an injected service from within an entity?

My application is using DDD with .NET Core and EF Core. I have some business rules that run within an entity that need to check dates against a cached list of company holiday dates. The company holidays are loaded from the db and cached by an application service that is configured with our DI container so it can be injected into our controllers, etc.
I cannot determine how, or whether it's the right/best approach, to get the service injected into the entity so it can grab those dates when running business rules. I did find this answer that appears to demonstrate one way to do it, but I wanted to see if there were any additional options, because that way has a bit of a code smell to me at first glance (adding a property to the DbContext to grab off the privately constructor-injected context).
Are there any other ways to accomplish something like this?
ORM classes are very rarely your domain objects. If you can start with your domain and seamlessly map to an ORM without the need for infrastructure specific alterations or attributes then that is fine; else you need to split your domain objects from your ORM objects.
You should not inject any services or repositories into aggregates. Aggregates should focus on the command/transactional side of the solution and work with pre-loaded state; they should avoid requesting additional state through any injected mechanism. The state should be obtained and handed to the aggregate.
In your specific scenario I would suggest loading your BusinessCalendar and then hand it to your aggregate when performing some function, e.g.:
public class TheAggregate
{
    public bool AttemptRegistration(BusinessCalendar calendar)
    {
        if (!calendar.IsWorkingDay(DateTime.Now))
        {
            return false;
        }

        // ... registration code

        return true;
    }

    // or perhaps...

    public void Register(DateTime registrationDate, BusinessCalendar calendar)
    {
        if (!calendar.IsWorkingDay(registrationDate))
        {
            throw new InvalidOperationException();
        }

        // ... registration code
    }
}
Another take on this is to have your domain ignore this bit and place the burden on the calling code. In this way, if you ask your domain to do something, it will do so, since, perhaps, a registration on a non-working day (in my trivial example) may be performed in some circumstances. In these cases the application layer is responsible for checking the calendar for "normal" registration, or for overriding the default behaviour in some circumstances. This is the same approach one would take for authorisation: the application layer is responsible for authorisation, and the domain should not care about it. If you can call the domain code, then you have been authorised to do so.
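To make that alternative concrete, here is a minimal sketch of the application-layer approach. The RegistrationService and the two repository interfaces are hypothetical names invented for this example; the aggregate registers unconditionally, and the calling code decides whether the calendar applies:
public interface IBusinessCalendarRepository   // hypothetical
{
    BusinessCalendar GetCurrent();
}

public interface ITheAggregateRepository   // hypothetical
{
    TheAggregate Get(Guid id);
    void Save(TheAggregate aggregate);
}

public class RegistrationService
{
    private readonly IBusinessCalendarRepository _calendars;
    private readonly ITheAggregateRepository _aggregates;

    public RegistrationService(
        IBusinessCalendarRepository calendars,
        ITheAggregateRepository aggregates)
    {
        _calendars = calendars;
        _aggregates = aggregates;
    }

    public void Register(Guid aggregateId, DateTime registrationDate, bool allowNonWorkingDay = false)
    {
        // The application layer loads the state and enforces the
        // working-day policy; the domain simply does as it is asked.
        var calendar = _calendars.GetCurrent();

        if (!allowNonWorkingDay && !calendar.IsWorkingDay(registrationDate))
        {
            throw new InvalidOperationException("Registration is only allowed on working days.");
        }

        // In this variant the aggregate's Register would not take the
        // calendar at all; the policy lives entirely up here.
        var aggregate = _aggregates.Get(aggregateId);
        aggregate.Register(registrationDate);
        _aggregates.Save(aggregate);
    }
}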

MVC - Best pattern (UoW + Repositories + Services + DI)

I've been doing MVC for some time, but this is my first contact with DI.
I started a new project with Ninject which seems pretty simple and easy to understand, however almost every tutorial I saw has UoW, Repositories and Services.
What I understand is that:
Repositories - Abstract Layer for interactions with EF / MongoDB / XML / Whatever may be a database (CRUD Operations)
UoW - Set of operations that correlate together, it may use N repositories to perform tasks that will be used in Controllers
Services - I don't really get the point of these; they seem like just one more layer that uses multiple UoWs to perform "more tasks"? I'm lost on this one.
OK, it took me some time to "digest" the Repository thing, since I prefer to pass the EF context through the UoW.
Is it OK if I skip the Repository and just use the context? Or is the Repository mainly there for unit testing?
What's the Service's usage?
Since I can perform all actions/tasks inside UoWs and then call them inside the controllers.
Is there any better set of patterns to use?
Since these terms are very common and you might be talking about any of them, I'm going to give a brief explanation of each.
Domain Services: When you have an entity and you start to push logic into it, you might get to a point where part of the logic doesn't really belong to that entity, so you create a Domain Service to abstract this logic away. An example would be:
public class Shipment
{
    ...

    public void CalculateFee(IFeeCalculatorService feeCalculatorService)
    {
        // Any additional, entity-relevant logic for fee calculation can live here as well.
        this.Fee = feeCalculatorService.Calculate();
    }

    ...
}
Application Services: These are the services that you will be calling from your controllers to encapsulate the operations needed for a specific task. Let's say you have a controller that receives a friendship-request approval or rejection. Your Application Service should receive enough data to be able to do the following (see the sketch after this list):
Find the friendship request domain entity
Call its approve or reject method
Call the methods to persist that change back to the database
Return relevant information to the controller
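As a rough illustration of those four steps, here is a minimal sketch; the service type, the repository interface, and the string status returned to the controller are all invented for the example:
public interface IFriendshipRequestRepository   // hypothetical
{
    FriendshipRequest Get(Guid id);
    void Save(FriendshipRequest request);
}

public class FriendshipRequestService
{
    private readonly IFriendshipRequestRepository _requests;

    public FriendshipRequestService(IFriendshipRequestRepository requests)
    {
        _requests = requests;
    }

    public string Approve(Guid requestId)
    {
        // 1. Find the friendship request domain entity
        var request = _requests.Get(requestId);

        // 2. Call its approve method
        request.Approve();

        // 3. Persist that change back to the database
        _requests.Save(request);

        // 4. Return relevant information to the controller
        return request.Status;
    }
}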
Infrastructure Services: These services will abstract the logic that is not related to the business, but related to how the application works. An example would be a service to validate security tokens received on your requests, or to perform logging activities.
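As a tiny, hedged illustration (both type names are invented), a token-validation infrastructure service might be no more than:
public interface ISecurityTokenValidator   // hypothetical
{
    bool IsValid(string token);
}

public class SecurityTokenValidator : ISecurityTokenValidator
{
    public bool IsValid(string token)
    {
        // Hypothetical checks: signature, expiry, issuer, and so on.
        // Nothing in here knows about the business domain.
        return !string.IsNullOrWhiteSpace(token) /* && further validation */;
    }
}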
EF's DbContext already implements both the UoW and Repository patterns, so you gain no benefit from implementing those again in your own code.
Services are a way to abstract business logic so it can be reused.
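Going back to the DbContext point, a service can consume the context directly; a small sketch, with all type names hypothetical:
public class OrderService
{
    private readonly ShopContext _context;   // hypothetical DbContext

    public OrderService(ShopContext context)
    {
        _context = context;
    }

    public void PlaceOrder(Order order)
    {
        // DbSet<Order> already plays the repository role, and
        // SaveChanges() commits the unit of work in one go.
        _context.Orders.Add(order);
        _context.SaveChanges();
    }
}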

Breeze and RESTful WebAPI

Question:
What value does breeze provide when I need to implement my own POST/PUT/GET endpoints per entity in WebAPI?
Background:
This seems to be a common implementation of a serverside Breeze controller:
[BreezeController]
public class TodosController : ApiController {

    readonly EFContextProvider<TodosContext> _contextProvider =
        new EFContextProvider<TodosContext>();

    // ~/breeze/todos/Metadata
    [HttpGet]
    public string Metadata() {
        return _contextProvider.Metadata();
    }

    // ~/breeze/todos/Todos
    // ~/breeze/todos/Todos?$filter=IsArchived eq false&$orderby=CreatedAt
    [HttpGet]
    public IQueryable<TodoItem> Todos() {
        return _contextProvider.Context.Todos;
    }

    // ~/breeze/todos/SaveChanges
    [HttpPost]
    public SaveResult SaveChanges(JObject saveBundle) {
        return _contextProvider.SaveChanges(saveBundle);
    }

    // other miscellaneous actions of no interest to us here
}
I'm in the middle of building a RESTish API that, up to this point, has endpoints like:
GET /api/todo/1
PUT /api/todo
POST /api/todo
It seems like Breeze requires the endpoints to be much simpler (for better or worse) - just a bunch of GETS and a SaveChanges POST endpoint.
This leads me to think that Breeze makes rapid development with a single web client, well, a breeze... but as soon as you have anonymous clients, you have to force them into whatever Breeze interface conventions you've created in your client, which seems to defeat the purpose of RESTful API design. Is this the case?
Breeze is, first and foremost, a client-side JavaScript framework. If you're not using Breeze on the client, the benefits of Breeze.WebApi are limited to:
Enhanced OData query support ($select and $expand support, extended $orderby)
Save interception points (beforeSaveEntity and beforeSaveEntities events)
Save result handling (updated entity keys, concurrency columns)
Metadata extraction and serialization
As you've surmised, Breeze has a different philosophy from REST regarding CRUD operations.
Breeze is designed for clients who may want to C/U/D many resources, of different types, in a single transaction. This allows users to manipulate the data in complex ways without hitting the server, then saving their changes when they are ready. For example, one could create a new Order, move two OrderLineItems from one Order to another, delete a third OrderLineItem, modify the quantity on a fourth, and then SaveChanges(). Breeze even supports using localStorage to work completely disconnected from the server. Once reconnected, the changes can all be saved.
REST was designed to operate on one resource at a time. Each C/U/D operation must be performed against the server immediately so that the response code can be acted upon. It works well for applications with simple update needs, but not for data-entry applications. While transactions can be supported in REST, they are cumbersome at best.
Having said that, your server-side Breeze API is not limited to what you see in the Todos example. Breeze supports Named Saves, which allows you to have different endpoints for different operations. You can also use Save Interception to ensure that your save bundle only contains the types that it should. And naturally, there's nothing preventing you from exposing both APIs on your server, and having both fed by the same persistence layer.
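For instance, a named save might look like the sketch below: an extra action on the TodosController from earlier, where the endpoint name and the type check are invented for the example, and the interception hook is the BeforeSaveEntitiesDelegate that EFContextProvider exposes:
// The client selects this endpoint through its SaveOptions, e.g.
// new SaveOptions({ resourceName: "SaveTodosOnly" }) - the name is hypothetical.
[HttpPost]
public SaveResult SaveTodosOnly(JObject saveBundle) {
    // Save interception: reject any entity type other than TodoItem.
    _contextProvider.BeforeSaveEntitiesDelegate = entityMap => {
        if (entityMap.Keys.Any(type => type != typeof(TodoItem))) {
            throw new InvalidOperationException("This endpoint saves only TodoItems.");
        }
        return entityMap;
    };
    return _contextProvider.SaveChanges(saveBundle);
}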
If you have to decide between them, you should start with your users. Real users (not developers) don't care about REST, they care about what the application can do. Ultimately, REST gives your application all the semantics of HTTP, and Breeze gives it all the semantics of a relational or object database. Which one to expose to your users should depend upon the use cases you need to support.

ASP Mvc Nhibernate Issue

I am experiencing some bizarre problems with NHibernate within my MVC web application.
There is no single consistent error; I keep getting loads of random ones:
Transaction not successfully started
New request is not allowed to start because it should come with valid transaction descriptor
Unexpected row count: -1; expected: 1
To give a little context to the setup: I am using Ninject to inject the sessions and other NHibernate-related objects. Currently I am using RequestScope, though I have also tried SingletonScope. I have a large and complicated data model, which is read out as a whole but persisted back in separate parts, as these can all be edited and saved individually.
An example would be a Customer object, which contains an address object, a contact object, a friends object, previous orders, etc.
So the whole object is read out, then mapped to the UI domain models and displayed in different partials within the page. Each partial can be updated individually via AJAX, so you may update one section or all of them together. The problems mainly appear when I try to persist them all together (2-4 simultaneous AJAX requests persisting chunks of the model).
Now, I have integration tests that just test the persistence and retrieval of entities, as a whole and individually, and these all pass fine. In the web app, however, it just seems to keep throwing random exceptions, and originally it refused to persist outside of the NHibernate cache. I found a way round this by wrapping most units of work within transactions, which got the data persisting but added new errors to the mix.
Originally I was thinking of just scrapping NHibernate from the project: although I really want its persistence/caching layer, it just didn't seem flexible enough for my domain, which seems odd, as I have used it before without much problem, although it doesn't like 1-1 mappings.
So has anyone else had flaky transaction/NHibernate issues like this within an ASP.NET MVC app? I know this may be a bit vague, as the errors don't point to one thing and it doesn't always error, so it's like stabbing in the dark, but I am out of ideas, so any help would be great!
-- Update --
I cannot post all relevant code as the project is huge, but the transaction bit looks like:
using (var transaction = sessionManager.Session.BeginTransaction(IsolationLevel.ReadUncommitted))
{
    try
    {
        // Do unit of work
        transaction.Commit();
    }
    catch (Exception)
    {
        transaction.Rollback();
        throw;
    }
}
Some of the main problems I have had on this project have stemmed from:
There are some 1-1 relationships with composite keys, though logically this makes sense
The NHibernate domain entities go through a mapping layer to become the UI domain entities, and vice versa when saving. The problem here is that, with the 1-1 mappings, when persisting the example Address I have to make a surrogate Customer object with the correct Id and then merge.
There is a LOT of AJAX that deals with chunks of the overall model (I talk as if there is one single model, but there are quite a few top-level models, just one that is most important)
Some notes that may help. I use Windsor, but I imagine the concepts are the same. It sounds like there may be a combination of things.
The SessionFactory should be created as a singleton and the session should be per web request. Something like:
Bind<ISessionFactory>()
    .ToProvider<SessionFactoryBuilder>()
    .InSingletonScope();

Bind<ISession>()
    .ToMethod(context => context.Kernel.Get<ISessionFactory>().OpenSession())
    .InRequestScope();
Be careful of keeping transactions open for too long; keep them as short-lived as possible to avoid deadlocks.
Check that your queries are running as expected by using a tool like NHProf. Often people load up too much of the graph, which impacts performance and can create deadlocks.
Check your mappings for things like not.lazyload(), consider whether you actually need the additional data in the queries, and keep the results returned to a minimum. Check your queries' execution plans and ensure adequate indexes are in place.
I have had issues with MVC3 action filters being cached, which meant transactions were not always started but would still attempt to be closed, causing issues. I moved all my transaction commits into ActionResults in the controllers, to keep each transaction as short as possible and close to the action (see the sketch after this list).
Check your cascades in your mappings and keep the updates to a minimum.
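To illustrate that last arrangement, here is a minimal sketch of committing inside the action, assuming the per-request ISession binding above; the controller, entity, and model names are hypothetical:
public class CustomerController : Controller
{
    private readonly ISession _session;   // injected per request by Ninject

    public CustomerController(ISession session)
    {
        _session = session;
    }

    [HttpPost]
    public ActionResult UpdateAddress(AddressModel model)
    {
        // The transaction opens and commits inside the action, so each
        // concurrent AJAX request gets its own short-lived transaction.
        using (var tx = _session.BeginTransaction())
        {
            var customer = _session.Get<Customer>(model.CustomerId);
            customer.Address = model.ToAddress();   // hypothetical mapping
            tx.Commit();
        }
        return Json(new { success = true });
    }
}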

Where / How to fit Solr into ASP.net MVC app (using nHibernate / Repository Pattern)

I'm currently in the middle of building a reasonably large question/answer-based application (kind of like stackoverflow / answerbag.com).
We're using SQL (Azure) and nHibernate for data access and MVC for the UI app.
So far, the schema is roughly along the lines of the Stack Overflow DB, in the sense that we have a single Post table (containing both questions and answers).
Probably going to use something along the lines of the following repository interface:
public interface IPostRepository
{
    void PutPost(Post post);
    void PutPosts(IEnumerable<Post> posts);
    void ChangePostStatus(string postID, PostStatus status);
    void DeleteArtefact(string postId, string artefactKey);
    void AddArtefact(string postId, string artefactKey);
    void AddTag(string postId, string tagValue);
    void RemoveTag(string postId, string tagValue);
    void MarkPostAsAccepted(string id);
    void UnmarkPostAsAccepted(string id);

    IQueryable<Post> FindAll();
    IQueryable<Post> FindPostsByStatus(PostStatus postStatus);
    IQueryable<Post> FindPostsByPostType(PostType postType);
    IQueryable<Post> FindPostsByStatusAndPostType(PostStatus postStatus, PostType postType);
    IQueryable<Post> FindPostsByNumberOfReplies(int numberOfReplies);
    IQueryable<Post> FindPostsByTag(string tag);
}
My question is:
Where / how would I fit Solr into this for better querying of these "Posts"?
(I'll be using solrnet for the actual communication with Solr)
Ideally, I'd be using the SQL DB as merely a persistent store.
The bulk of the above IQueryable operations would move into some kind of SolrFinder class (or something like that).
The Body property is the one that currently causes problems - it's fairly large and slows down queries in SQL.
My main problem is that if someone "updates" a post - adds a new tag, for example - then that whole post will need re-indexing.
Obviously, doing this will require a query like this:
"SELECT * FROM POST WHERE ID = xyz"
This will, of course, be very slow.
SolrNet has an NHibernate facility, but I believe this will have the same result as above?
I thought of a way around this, which I'd like your views on:
Adding the ID to a queue (Amazon SQS or something - I like the ease of use with this)
Having a service (or a bunch of services) somewhere that does the above-mentioned query, constructs the document, and re-adds it to Solr.
Another problem I'm having with my design:
Where should the "re-indexing" method(s) be called from?
The MVC controller? Or should I have a "PostService" type class that wraps the instance of IPostRepository?
Any pointers are gratefully received on this one!
On the e-commerce site that I work for, we use Solr to provide fast faceting and searching of the product catalog. (In non-Solr geek terms, this means the "ATI Cards (34), NVIDIA (23), Intel (5)" style of navigation links that you can use to drill-down through product catalogs on sites like Zappos, Amazon, NewEgg, and Lowe's.)
This is because Solr is designed to do this kind of thing fast and well, and trying to do this kind of thing efficiently in a traditional relational database is, well, not going to happen, unless you want to start adding and removing indexes on the fly and go full EAV, which is just cough Magento cough stupid. So our SQL Server database is the "authoritative" data store, and the Solr indexes are read-only "projections" of that data.
You're with me so far because it sounds like you are in a similar situation. The next step is determining whether or not it is OK that the data in the Solr index may be slightly stale. You've probably accepted the fact that it will be somewhat stale, but the next decisions are
How stale is too stale?
When do I value speed or querying features over staleness?
For example, I have what I call the "Worker", which is a Windows service that uses Quartz.NET to execute C# IJob implementations periodically. Every 3 hours, one of these jobs that gets executed is the RefreshSolrIndexesJob, and all that job does is ping an HttpWebRequest over to http://solr.example.com/dataimport?command=full-import. This is because we use Solr's built-in DataImportHandler to actually suck in the data from the SQL database; the job just has to "touch" that URL periodically to make the sync work. Because the DataImportHandler commits the changes periodically, this is all effectively running in the background, transparent to the users of the Web site.
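That job amounts to very little code; a hedged sketch, assuming Quartz.NET's synchronous IJob interface and the DataImportHandler URL above:
public class RefreshSolrIndexesJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // "Touching" this URL tells Solr's DataImportHandler to pull fresh
        // rows from the SQL database; the import itself runs inside Solr,
        // in the background.
        var request = WebRequest.Create(
            "http://solr.example.com/dataimport?command=full-import");
        using (request.GetResponse()) { }
    }
}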
This does mean that information in the product catalog can be up to 3 hours stale. A user might click a link for "Medium In Stock (3)" on the catalog page (since this kind of faceted data is generated by querying SOLR) but then see on the product detail page that no mediums are in stock (since on this page, the quantity information is one of the few things not cached and queried directly against the database). This is annoying, but generally rare in our particular scenario (we are a reasonably small business and not that high traffic), and it will be fixed up in 3 hours anyway when we rebuild the whole index again from scratch, so we have accepted this as a reasonable trade-off.
If you can accept this degree of "staleness", then this background worker process is a good way to go. You could take the "rebuild the whole thing every few hours" approach, or your repository could insert the ID into a table, say, dbo.IdentitiesOfStuffThatNeedsUpdatingInSolr, and then a background process can periodically scan through that table and update only those documents in Solr if rebuilding the entire index from scratch periodically is not reasonable given the size or complexity of your data set.
A third approach is to have your repository spawn a background thread that updates the Solr index in regards to that current document more or less at the same time, so the data is only stale for a few seconds:
class MyRepository
{
    void Save(Post post)
    {
        // the following method runs on the current thread
        SaveThePostInTheSqlDatabaseSynchronously(post);

        // the following method spawns a new thread, task,
        // QueueUserWorkItem, whatever floats our boat this week,
        // and so returns immediately
        UpdateTheDocumentInTheSolrIndexAsynchronously(post);
    }
}
But if this explodes for some reason, you might miss updates in Solr, so it's still a good idea to have Solr do a periodic "blow it all away and refresh", or have a reaper background Worker-type service that checks for out-of-date data in Solr every once in a blue moon.
As for querying this data from Solr, there are a few approaches you could take. One is to hide the fact that Solr exists entirely via the methods of the Repository. I personally don't recommend this because chances are your Solr schema is going to be shamelessly tailored to the UI that will be accessing that data; we've already made the decision to use Solr to provide easy faceting, sorting, and fast display of information, so we might as well use it to its fullest extent. This means making it explicit in code when we mean to access Solr and when we mean to access the up-to-date, non-cached database object.
In my case, I end up using NHibernate to do the CRUD access (loading an ItemGroup, futzing with its pricing rules, and then saving it back), forgoing the repository pattern because I don't typically see its value when NHibernate and its mappings are already abstracting the database. (This is a personal choice.)
But when querying on the data, I know pretty well if I'm using it for catalog-oriented purposes (I care about speed and querying) or for displaying in a table on a back-end administrative application (I care about currency). For querying on the Web site, I have an interface called ICatalogSearchQuery. It has a Search() method that accepts a SearchRequest where I define some parameters--selected facets, search terms, page number, number of items per page, etc.--and gives back a SearchResult--remaining facets, number of results, the results on this page, etc. Pretty boring stuff.
Where it gets interesting is that the implementation of that ICatalogSearchQuery is using a list of ICatalogSearchStrategys underneath. The default strategy, the SolrCatalogSearchStrategy, hits SOLR directly via a plain old-fashioned HttpWebRequest and parsing the XML in the HttpWebResponse (which is much easier to use, IMHO, than some of the SOLR client libraries, though they may have gotten better since I last looked at them over a year ago). If that strategy throws an exception or vomits for some reason, then the DatabaseCatalogSearchStrategy hits the SQL database directly--although it ignores some parameters of the SearchRequest, like faceting or advanced text searching, since that is inefficient to do there and is the whole reason we are using Solr in the first place. The idea is that usually SOLR is answering my search requests quickly in full-featured glory, but if something blows up and SOLR goes down, then the catalog pages of the site can still function in "reduced-functionality mode" by hitting the database with a limited feature set directly. (Since we have made explicit in code that this is a search, that strategy can take some liberties in ignoring some of the search parameters without worrying about affecting clients too severely.)
Key takeaway: What is important is that the decision to perform a query against a possibly-stale data store versus the authoritative data store has been made explicit--if I want fast, possibly stale data with advanced search features, I use ICatalogSearchQuery. If I want slow, up-to-date data with the insert/update/delete capability, I use NHibernate's named queries (or a repository in your case). And if I make a change in the SQL database, I know that the out-of-process Worker service will update Solr eventually, making things eventually consistent. (And if something was really important, I could broadcast an event or ping the SOLR store directly, telling it to update, possibly in a background thread if I had to.)
Hope that gives you some insight.
We use Solr to query a large product database: around 1 million products and 30 stores.
What we did was put triggers on the product and stock tables in our SQL Server. Each time a row is changed, the product is flagged for re-indexing, and a Windows service grabs these products and posts them to Solr every 10 seconds (with a limit of 100 products per batch).
It's super efficient - almost real-time info for the stock.
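A minimal sketch of what that service's polling step could look like, assuming SolrNet's ISolrOperations<T> (AddRange/Commit) plus hypothetical data-access helpers for the flag table:
public class SolrReindexService
{
    private readonly ISolrOperations<Product> _solr;   // from SolrNet

    public SolrReindexService(ISolrOperations<Product> solr)
    {
        _solr = solr;
    }

    // Called every 10 seconds by a timer in the Windows service.
    public void PushFlaggedProducts()
    {
        // Hypothetical helper: reads up to 100 rows flagged by the triggers.
        var products = LoadFlaggedProducts(batchSize: 100);
        if (products.Count == 0) return;

        _solr.AddRange(products);
        _solr.Commit();

        // Hypothetical helper: clears the re-index flags for this batch.
        ClearFlags(products);
    }

    private IList<Product> LoadFlaggedProducts(int batchSize)
    {
        // e.g. SELECT TOP (@batchSize) ... WHERE NeedsReindex = 1
        throw new NotImplementedException("data access omitted in this sketch");
    }

    private void ClearFlags(IList<Product> products)
    {
        // e.g. UPDATE ... SET NeedsReindex = 0 WHERE Id IN (...)
        throw new NotImplementedException("data access omitted in this sketch");
    }
}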
If you have a big text field (your 'body' field), then yes, re-index in background. The solutions you mentioned (queue or periodic background service) will do.
MVC controllers should be oblivious of this process.
I noticed you have IQueryables in your repository interface. SolrNet does not currently have a LINQ provider. Anyway, if those operations are all you're going to do with Solr (i.e. no faceting), you might want to consider using Lucene.Net instead, which does have a LINQ provider.
