Breeze projection query from already-loaded entity

If I use Breeze to load a partial entity:
var query = EntityQuery.from('material')
    .select('Id, MaterialName, MaterialType, MaterialSubType')
    .orderBy(orderBy.material);

return manager.executeQuery(query)
    .then(querySucceeded)
    .fail(queryFailed);

function querySucceeded(data) {
    var list = partialMapper.mapDtosToEntities(
        manager, data.results, entityNames.material, 'id');
    if (materialsObservable) {
        materialsObservable(list);
    }
    log('Retrieved Materials from remote data source', data, true);
}
...and I also want another, slightly different partial query from the same entity (retrieving a few other fields, for example), then I'm assuming that I need to do another, separate query, since those fields weren't retrieved in the first query?
OK, so what if I want to use the same fields retrieved in the first query (Id, MaterialName, MaterialType, MaterialSubType) but I want to call those fields different names in the second query (MaterialName becomes just "name", MaterialType becomes "masterType", and so on)? Is it possible to clone the partial entity I already have in memory (assuming it is in memory?) and rename the fields, or do I still need to do a completely separate query?

I think I would "union" the two cases into one projection if I could afford to do so. That would simplify things dramatically. But it's really important to understand the following point:
You do not need to turn query projection results into entities!
Background: the CCJS example
You probably learned about the projection-into-entities technique from the CCJS example in John Papa's superb PluralSight course "Single Page Apps JumpStart". CCJS uses this technique for a very specific reason: to simplify list update without making a trip to the server.
Consider the CCJS "Sessions List" which is populated by a projection query. John didn't have to turn the query results into entities. He could have bound directly to the projected results. Remember that Knockout happily binds to raw data values. The user never edits the sessions on that list directly. If displayed session values can't change, turning them into observable properties is a waste of CPU.
When you tap on a Session, you go to a Session view/edit screen with access to almost every property of the complete session entity. CCJS needs the full entity there so it looks for the full (not partial) session in cache and, if not found, loads the entity from the server. Even to this point there is no particular value in having previously converted the original projection results into (partial) session entities.
Now edit the Session - change the title - and save it. Return to the "Sessions List"
Question
How do you make sure that the updated title appears in the Sessions List?
Suppose we bound the Sessions List HTML directly to the projection data objects. Those objects are not entities; they're just objects. The entity you edited in the session view is not an object in the collection displayed in the Sessions List. Yes, there is a corresponding object in the list - one that has the same session id - but it is not the same object.
Choices
#1: Refresh the list from the server by reissuing the projection query. Bind directly to the projection data. Note that the data consist of raw JavaScript objects, not entities; they are not in the Breeze cache.
#2: Publish an event after saving the real session entity; the subscribing "Sessions List" ViewModel hears the event, extracts the changes, and updates its copy of the session in the list.
#3: Use the projection-into-entity technique so that you can use a session entity everywhere.
Pros and Cons
#1 is easy to implement. But it requires a server trip every time you enter the Sessions List view.
One of the CCJS design goals was that, once loaded, it should be able to operate entirely offline with zero access to the server. It should work crisply when connectivity is intermittent and poor.
CCJS is your always-ready guide to the conference. It tells you instantly what sessions are available, when and where, so you can find the session you want as you're walking the halls and get there. If you've been to a tech conference (or a conference hotel), you know the wifi is generally awful, and an app is almost useless if it only works when it has direct access to the server.
#1 is not well suited to the intended operating environment for CCJS.
The CCJS Jumpstart is part way down that "server independent" path; you'll see something much closer to a full offline implementation soon.
You'll also lose the ability to navigate to related entities. The Sessions List displays each session's track, timeslot and room. That's repetitive information found in the "lookup" reference entities. You'll either have to expand the projection to include this information in a "flattened" view of the session (fatter payload) or get clever on the client-side and patch in the track, timeslot and room data by hand (complexity).
#2 helps with offline/intermittent connectivity scenarios. Of course you'll have to set up the messaging system, establish a protocol about saved entities and teach the Sessions List to find and update the affected session projection object. That's not super difficult - the Breeze EntityManager publishes an event that may be sufficient - but it would take even more code.
#3 is good for "server independence", has a small projection payload, is super-easy, and is a cool demonstration of Breeze. You have to manage the isPartial flag so you always know whether the session in cache is complete. That's not hard.
It could get more complicated if you needed multiple flavors of "partial entity" ... which seems to be where you are going. That was not an issue in CCJS.
John chose #3 for CCJS because it fit the application objectives.
That doesn't make it the right choice for every application. It may not be the right choice for you.
For example, if you always have a fast, low latency connection, then #1 may be your best choice. I really don't know.
I like the cast-to-entity approach myself because it is easy and works so well most of the time. I do think carefully about that choice before I make it.
Summary
You do not have to turn projection query results into entities
You can bind to projected data directly, without Knockout observable properties, if they are read-only
Make sure you have a good reason to convert projected data into (partial) entities.
CCJS has a good reason to convert projected query data into entities. Do you?

Related

Using RestKit, do you have to remap EVERYTHING each time something changes?

Right now I have a server that formats my data exactly how RestKit wants it, and RestKit just takes it and directly maps it to Core Data.
This works fine, but when I start to accumulate a lot of data it becomes slow.
For example, I have one object called "stories", and each story contains an array of "posts". Each time a new "post" gets added, I regenerate the "story" object to which the new post belongs and return the story object to the user for RestKit to map. As a story accumulates many posts, this process becomes very slow for RestKit. I would prefer a way to just send back new posts and then tell RestKit "hey, add this post to the array of posts on this story", in contrast to what I do now, which is more like "replace this story with this one I just returned, which includes all posts, including any new or updated ones".
Is this possible within restkit? Am I better served just manipulating core data myself to support updates?
Yes, it's possible.
You can look at 'foreign key mapping' to connect your new posts to the existing story. The most important part is to set the relationship assignment type to Union, because the default is Replace.

ASP.NET MVC NHibernate Issue

I am experiencing some bizarre problems with NHibernate within my MVC web application.
There is not one consistent error; I keep getting loads of random ones:
Transaction not successfully started
New request is not allowed to start because it should come with valid transaction descriptor
Unexpected row count: -1; expected: 1
To give a little context to the setup: I am using Ninject to inject the sessions and other NHibernate-related objects. Currently I am using RequestScope, though I have also tried SingletonScope. I have a large and complicated data model, which is read out as a whole but persisted back in separate parts, as these can all be edited and saved individually.
An example would be a Customer object, which contains an address object, a contact object, a friends object, a previous-orders object, etc.
So the whole object is read out, then mapped to the UI domain models and displayed in different partials within the page. Each partial can be updated individually via Ajax, so you may update one section or you could update them all together. It mainly seems to give me problems when I try to persist them all together (so 2-4 simultaneous Ajax requests to persist chunks of the model).
Now, I have integration tests that just test the persistence and retrieval of entities, both as a whole and individually, and they all pass fine. In the web app, however, it just seems to keep throwing random exceptions, and originally it refused to persist anything outside of the NHibernate cache. I found a way around this by wrapping most units of work within transactions, which got the data persisting but started adding new errors to the mix.
Originally I was thinking of just scrapping NHibernate from the project: although I really want its persistence/caching layer, it just didn't seem flexible enough for my domain. That seems odd, as I have used it before without much problem, although it doesn't like 1-1 mappings.
So, has anyone else had flaky transaction/NHibernate issues like this within an ASP.NET MVC app? I know this may be a bit vague, as the errors don't point to one thing and it doesn't always error, so it's like stabbing in the dark, but I am out of ideas, so any help would be great!
-- Update --
I cannot post all relevant code as the project is huge, but the transaction bit looks like:
using (var transaction = sessionManager.Session.BeginTransaction(IsolationLevel.ReadUncommitted))
{
    try
    {
        // Do unit of work
        transaction.Commit();
    }
    catch (Exception)
    {
        transaction.Rollback();
        throw;
    }
}
Some of the main problems I have had on this project have stemmed from:
There are some 1-1 relationships with composite keys, but logically it makes sense.
The NHibernate domain entities go through a mapping layer to become the UI domain entities, and vice versa when saving. The problem here is that, with the 1-1 mappings, when persisting the example Address I have to make a surrogate Customer object with the correct Id and then merge.
There is A LOT of Ajax that deals with chunks of the overall model (I talk as if there is one single model, but there are quite a few top-level models; just one is the most important).
Some notes that may help. I use Windsor, but I imagine the concepts are the same. It sounds like there may be a combination of things going on.
The SessionFactory should be created as a singleton, and the session should be per web request. Something like:
Bind<ISessionFactory>()
    .ToProvider<SessionFactoryBuilder>()
    .InSingletonScope();

Bind<ISession>()
    .ToMethod(context => context.Kernel.Get<ISessionFactory>().OpenSession())
    .InRequestScope();
Be careful of keeping transactions open for too long; keep them as short-lived as possible to avoid deadlocks.
Check that your queries are running as expected by using a tool like NHProf. Often people load up too much of the graph, which impacts performance and can create deadlocks.
Check your mappings for things like not.lazyload(), see if you actually need the additional data in the queries, and keep the results returned to a minimum. Check your queries' execution plans and ensure adequate indexes are in place.
I have had issues with MVC3 action filters being cached, which meant transactions were not always started but would still be closed, causing issues. I moved all my transaction commits into the ActionResults in the controllers to keep each transaction as short as possible and close to the action (see the sketch after these notes).
Check your cascades in your mappings and keep the updates to a minimum.
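For what it's worth, keeping the whole transaction inside the action might look roughly like the sketch below. It assumes an ISession injected per request (as in the bindings above); the controller, model, and entity names are illustrative, not taken from the question.

using System.Web.Mvc;
using NHibernate;

public class CustomerController : Controller
{
    private readonly ISession session;

    public CustomerController(ISession session)
    {
        this.session = session;
    }

    [HttpPost]
    public ActionResult SaveAddress(AddressModel model)
    {
        // Begin, commit, and dispose the transaction entirely within the action,
        // so it stays as short-lived as possible and is never left to a cached filter.
        using (var transaction = session.BeginTransaction())
        {
            var customer = session.Get<Customer>(model.CustomerId);
            customer.Address = model.ToAddress();
            transaction.Commit();
        }

        return Json(new { success = true });
    }
}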

How can an ASP.NET MVC Action method access sub entities of an aggregate root?

I'm having trouble understanding how one would access the sub-entities of an aggregate root. From answers to my previous question I now understand that I need to identify the aggregate roots of my model, and then only set up repositories that handle these root objects.
So say I have an Order object that contains Items. Items must exist within an Order, so the Order is the aggregate root. But what if I want to include as part of my site an OrderItem details page? The URL to this page may be something like /Order/ItemDetails/1234, where 1234 is the ID of the OrderItem. Yet this would require that I retrieve an Item directly by ID, and because it is not an aggregate root I should not have an OrderItemRepository that can retrieve an OrderItem by ID.
Since I want to work with OrderItems independently of an Order, does that imply that OrderItem is not actually part of the Order aggregate but is another aggregate root itself?
I don't know your business rules, of course, but I can't think of a case where you would have an orderitem that doesn't have an order. Not saying you wouldn't want to "work with one" by itself, but it still has to have an order, imo, and the order is sort of in charge of the relationship; e.g. you would represent all this by adding or deleting items from an order.
In situations like this, I usually will still require access to the items through the order. It's pretty easy to set up; in URLs I would just do /order/123/item/456. Or, if item ordering is stored / important (it is normally stored at least indirectly via the order of entry), you could do /order/123/item/1 to retrieve the first item on the order.
In the controller, then, I just retrieve the order from the OrderRepository and then access the appropriate item from there.
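A rough sketch of that controller, assuming a conventional OrderRepository and a route that supplies both ids (all of the names here are illustrative):

using System.Linq;
using System.Web.Mvc;

public class OrderController : Controller
{
    private readonly IOrderRepository orderRepository;

    public OrderController(IOrderRepository orderRepository)
    {
        this.orderRepository = orderRepository;
    }

    // e.g. routed from /order/123/item/456
    public ActionResult ItemDetails(int orderId, int itemId)
    {
        // Always load the aggregate root first...
        var order = orderRepository.GetById(orderId);

        // ...then let the root hand out its child entity.
        var item = order.Items.Single(i => i.Id == itemId);

        return View(item);
    }
}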
All that said, I do agree with Arnis that you don't always have to follow this pattern. It's a case-by-case thing, and you should evaluate the tradeoffs before doing it.
In Your case, I would retrieve OrderItem directly by URL /OrderItem/1234.
I personally don't try to abstract persistence (I don't use repository pattern). Also - I don't follow repository per aggregate root principle. But I do isolate domain model from persistence.
The main reason for that is - it's near-impossible to abstract persistence mechanisms completely. It's a leaky abstraction (e.g. try specifying eager/lazy loading for the ORM that lives underneath without polluting the repository API).
Another reason - it does not matter that much in what way You report data. The reporting part is boring and relatively unimportant. The real value of the application is what it can do - automation of processes. So it's much more important how Your application behaves, how it manages to stay consistent, how objects interact, etc.
When thinking about this problem, it's good to remember the Law of Demeter. The point is - it should be applied only if we explicitly want to hide internals. In Your case - we don't want to hide order items.
So - exploiting the fact that entity Ids are globally unique (as opposed to unique only in the Order context), retrieving them directly is just a short-cut, and there is nothing wrong with it.
Interestingly enough - this can be pushed forward.
Even behavior encapsulation can and should be loosened up too.
E.g. - it makes more sense to have orderItem.EditComments("asdf") than order.EditOrderItemComments(order.OrderItems[0], "asdf").

Where / How to fit Solr into ASP.net MVC app (using nHibernate / Repository Pattern)

I'm currently in the middle of building a reasonably large question/answer based application (kind of like stackoverflow / answerbag.com).
We're using SQL (Azure) and NHibernate for data access, and MVC for the UI app.
So far, the schema is roughly along the lines of the stackoverflow db, in the sense that we have a single Post table (containing both questions and answers).
We're probably going to use something along the lines of the following repository interface:
public interface IPostRepository
{
    void PutPost(Post post);
    void PutPosts(IEnumerable<Post> posts);
    void ChangePostStatus(string postID, PostStatus status);
    void DeleteArtefact(string postId, string artefactKey);
    void AddArtefact(string postId, string artefactKey);
    void AddTag(string postId, string tagValue);
    void RemoveTag(string postId, string tagValue);
    void MarkPostAsAccepted(string id);
    void UnmarkPostAsAccepted(string id);

    IQueryable<Post> FindAll();
    IQueryable<Post> FindPostsByStatus(PostStatus postStatus);
    IQueryable<Post> FindPostsByPostType(PostType postType);
    IQueryable<Post> FindPostsByStatusAndPostType(PostStatus postStatus, PostType postType);
    IQueryable<Post> FindPostsByNumberOfReplies(int numberOfReplies);
    IQueryable<Post> FindPostsByTag(string tag);
}
My question is:
Where / how would I fit Solr into this for better querying of these "Posts"?
(I'll be using SolrNet for the actual communication with Solr.)
Ideally, I'd be using the SQL db as merely a persistent store -
the bulk of the above IQueryable operations would move into some kind of SolrFinder class (or something like that).
The Body property is the one that causes the problems currently - it's fairly large, and slows down queries on SQL.
My main problem is that if someone "updates" a post - adds a new tag, for example - then that whole post will need re-indexing.
Obviously, doing this will require a query like this:
"SELECT * FROM POST WHERE ID = xyz"
This will of course, be very slow.
SolrNet has an NHibernate facility - but I believe this will give the same result as above?
I thought of a way around this, which I'd like your views on:
Adding the ID to a queue (Amazon SQS or something - I like the ease of use there)
Having a service (or bunch of services) somewhere that does the above-mentioned query, constructs the document, and re-adds it to Solr.
Another problem I'm having with my design:
Where should the "re-indexing" method(s) be called from?
The MVC controller? Or should I have a "PostService"-type class that wraps the instance of IPostRepository?
Any pointers are gratefully received on this one!
On the e-commerce site that I work for, we use Solr to provide fast faceting and searching of the product catalog. (In non-Solr geek terms, this means the "ATI Cards (34), NVIDIA (23), Intel (5)" style of navigation links that you can use to drill-down through product catalogs on sites like Zappos, Amazon, NewEgg, and Lowe's.)
This is because Solr is designed to do this kind of thing fast and well, and trying to do this kind of thing efficiently in a traditional relational database is, well, not going to happen, unless you want to start adding and removing indexes on the fly and go full EAV, which is just cough Magento cough stupid. So our SQL Server database is the "authoritative" data store, and the Solr indexes are read-only "projections" of that data.
You're with me so far because it sounds like you are in a similar situation. The next step is determining whether or not it is OK that the data in the Solr index may be slightly stale. You've probably accepted the fact that it will be somewhat stale, but the next decisions are
How stale is too stale?
When do I value speed or querying features over staleness?
For example, I have what I call the "Worker", which is a Windows service that uses Quartz.NET to execute C# IJob implementations periodically. Every 3 hours, one of these jobs that gets executed is the RefreshSolrIndexesJob, and all that job does is ping an HttpWebRequest over to http://solr.example.com/dataimport?command=full-import. This is because we use Solr's built-in DataImportHandler to actually suck in the data from the SQL database; the job just has to "touch" that URL periodically to make the sync work. Because the DataImportHandler commits the changes periodically, this is all effectively running in the background, transparent to the users of the Web site.
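The job body itself is tiny - roughly the sketch below, assuming Quartz.NET 2.x (IJob / IJobExecutionContext); only the job name and the URL come from the description above, the rest is illustrative.

using System.Net;
using Quartz;

public class RefreshSolrIndexesJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // "Touching" the DataImportHandler URL tells Solr to pull fresh data
        // from the SQL database; the import runs and commits on Solr's side.
        var request = (HttpWebRequest)WebRequest.Create(
            "http://solr.example.com/dataimport?command=full-import");

        using (request.GetResponse())
        {
            // Nothing to read - issuing the request is the whole job.
        }
    }
}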
This does mean that information in the product catalog can be up to 3 hours stale. A user might click a link for "Medium In Stock (3)" on the catalog page (since this kind of faceted data is generated by querying SOLR) but then see on the product detail page that no mediums are in stock (since on this page, the quantity information is one of the few things not cached and queried directly against the database). This is annoying, but generally rare in our particular scenario (we are a reasonably small business and not that high traffic), and it will be fixed up in 3 hours anyway when we rebuild the whole index again from scratch, so we have accepted this as a reasonable trade-off.
If you can accept this degree of "staleness", then this background worker process is a good way to go. You could take the "rebuild the whole thing every few hours" approach, or your repository could insert the ID into a table, say, dbo.IdentitiesOfStuffThatNeedsUpdatingInSolr, and then a background process can periodically scan through that table and update only those documents in Solr if rebuilding the entire index from scratch periodically is not reasonable given the size or complexity of your data set.
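That sweep might look roughly like the following with SolrNet. This is only a sketch: the PostDocument type, the loadPostDocument delegate, and how the pending IDs are read out of that table are all assumptions, not part of the original answer.

using System;
using System.Collections.Generic;
using SolrNet;
using SolrNet.Attributes;

// The Solr document; fields mirror whatever the Solr schema actually defines.
public class PostDocument
{
    [SolrUniqueKey("id")]
    public string Id { get; set; }

    [SolrField("body")]
    public string Body { get; set; }
}

public class PendingSolrUpdater
{
    private readonly ISolrOperations<PostDocument> solr;
    private readonly Func<string, PostDocument> loadPostDocument;

    public PendingSolrUpdater(ISolrOperations<PostDocument> solr,
                              Func<string, PostDocument> loadPostDocument)
    {
        this.solr = solr;
        this.loadPostDocument = loadPostDocument;
    }

    // Called periodically by a background worker, never by the MVC controllers.
    public void Sweep(IEnumerable<string> pendingPostIds)
    {
        foreach (var id in pendingPostIds)
        {
            // Rebuild each flagged document from the authoritative SQL data
            // and push only that document to Solr.
            solr.Add(loadPostDocument(id));
        }

        // One commit per sweep rather than one per document.
        solr.Commit();
    }
}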
A third approach is to have your repository spawn a background thread that updates the Solr index in regards to that current document more or less at the same time, so the data is only stale for a few seconds:
class MyRepository
{
    void Save(Post post)
    {
        // the following method runs on the current thread
        SaveThePostInTheSqlDatabaseSynchronously(post);

        // the following method spawns a new thread, task, QueueUserWorkItem,
        // whatever floats our boat this week, and so returns immediately
        UpdateTheDocumentInTheSolrIndexAsynchronously(post);
    }
}
But if this explodes for some reason, you might miss updates in Solr, so it's still a good idea to have Solr do a periodic "blow it all away and refresh", or have a reaper background Worker-type service that checks for out-of-date data in Solr every once in a blue moon.
As for querying this data from Solr, there are a few approaches you could take. One is to hide the fact that Solr exists entirely via the methods of the Repository. I personally don't recommend this because chances are your Solr schema is going to be shamelessly tailored to the UI that will be accessing that data; we've already made the decision to use Solr to provide easy faceting, sorting, and fast display of information, so we might as well use it to its fullest extent. This means making it explicit in code when we mean to access Solr and when we mean to access the up-to-date, non-cached database object.
In my case, I end up using NHibernate to do the CRUD access (loading an ItemGroup, futzing with its pricing rules, and then saving it back), forgoing the repository pattern because I don't typically see its value when NHibernate and its mappings are already abstracting the database. (This is a personal choice.)
But when querying on the data, I know pretty well if I'm using it for catalog-oriented purposes (I care about speed and querying) or for displaying in a table on a back-end administrative application (I care about currency). For querying on the Web site, I have an interface called ICatalogSearchQuery. It has a Search() method that accepts a SearchRequest where I define some parameters--selected facets, search terms, page number, number of items per page, etc.--and gives back a SearchResult--remaining facets, number of results, the results on this page, etc. Pretty boring stuff.
Where it gets interesting is that the implementation of that ICatalogSearchQuery is using a list of ICatalogSearchStrategys underneath. The default strategy, the SolrCatalogSearchStrategy, hits SOLR directly via a plain old-fashioned HttpWebRequest and parsing the XML in the HttpWebResponse (which is much easier to use, IMHO, than some of the SOLR client libraries, though they may have gotten better since I last looked at them over a year ago). If that strategy throws an exception or vomits for some reason, then the DatabaseCatalogSearchStrategy hits the SQL database directly--although it ignores some parameters of the SearchRequest, like faceting or advanced text searching, since that is inefficient to do there and is the whole reason we are using Solr in the first place. The idea is that usually SOLR is answering my search requests quickly in full-featured glory, but if something blows up and SOLR goes down, then the catalog pages of the site can still function in "reduced-functionality mode" by hitting the database with a limited feature set directly. (Since we have made explicit in code that this is a search, that strategy can take some liberties in ignoring some of the search parameters without worrying about affecting clients too severely.)
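Compressed into a sketch (the type names come from the description above, but the members and the wiring are guesses, not the actual implementation):

using System;

public class SearchRequest  { /* selected facets, search terms, page number, page size ... */ }
public class SearchResult   { /* remaining facets, total count, the page of results ... */ }

public interface ICatalogSearchStrategy
{
    SearchResult Search(SearchRequest request);
}

public interface ICatalogSearchQuery
{
    SearchResult Search(SearchRequest request);
}

public class CatalogSearchQuery : ICatalogSearchQuery
{
    private readonly ICatalogSearchStrategy[] strategies;

    // e.g. a SolrCatalogSearchStrategy followed by a DatabaseCatalogSearchStrategy
    public CatalogSearchQuery(params ICatalogSearchStrategy[] strategies)
    {
        this.strategies = strategies;
    }

    public SearchResult Search(SearchRequest request)
    {
        foreach (var strategy in strategies)
        {
            try
            {
                // Solr first, in full-featured glory ...
                return strategy.Search(request);
            }
            catch
            {
                // ... and if it blows up, fall through to the next strategy,
                // which may ignore facets and advanced text search.
            }
        }

        throw new InvalidOperationException("No search strategy could handle the request.");
    }
}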
Key takeaway: What is important is that the decision to perform a query against a possibly-stale data store versus the authoritative data store has been made explicit--if I want fast, possibly stale data with advanced search features, I use ICatalogSearchQuery. If I want slow, up-to-date data with the insert/update/delete capability, I use NHibernate's named queries (or a repository in your case). And if I make a change in the SQL database, I know that the out-of-process Worker service will update Solr eventually, making things eventually consistent. (And if something was really important, I could broadcast an event or ping the SOLR store directly, telling it to update, possibly in a background thread if I had to.)
Hope that gives you some insight.
We use Solr to query a large product database.
Around 1 million products, and 30 stores.
What we did is use triggers on the product table and stock tables in our SQL Server.
Each time a row is changed, it flags the product to be re-indexed. And we have a Windows service that grabs these products and posts them to Solr every 10 seconds (with a limit of 100 products per batch).
It's super efficient - almost real-time info for the stock.
If you have a big text field (your 'body' field), then yes, re-index in background. The solutions you mentioned (queue or periodic background service) will do.
MVC controllers should be oblivious of this process.
I noticed you have IQueryables in your repository interface. SolrNet does not currently have a LINQ provider. Anyway, if those operations are all you're going to do with Solr (i.e. no faceting), you might want to consider using Lucene.Net instead, which does have a LINQ provider.

Nhibernate (and ORMs in General): work with Objects or ObjectIds?

This is something that has been pulling at me for a while. Consider an (MVC-type) web application with an ORM (e.g. NHibernate) as the data access layer.
On one hand - the OOP/Rich domain model hand - I feel I should be passing around (references to) the real objects I am talking about.
On the other hand - the DB/Web App hand - I feel that it is easier and more efficient just to pass the integer Ids of the objects rather than the objects themselves.
Consider an ecommerce catalogue type application:
The user is logged in and navigates to a product page.
They post a comment.
The controller action tasked with persisting this comment has 3 pieces of information: a) The user id (from the auth cookie or wherever), b) The product id (probably from the querystring), and c) the comment text.
Now, what is best practice here? Is it really worth inflating the user and product objects (e.g. by getting them from the repository, with all the DB work that entails) when we know that all they will be used for is so the ORM can read their IDs and set the appropriate foreign keys in the DB table that stores the comments?
What are people's views on this? Perhaps web apps should be given a little more leeway than other apps, due to their stateless nature? I imagine there will be 'it depends' answers, but maybe some people are purists about the issue.
This is a general question which probably is applicable to many platforms, but if giving examples I would prefer them to be ASP.NET MVC if possible.
Thank you.
NHibernate has the Load operation (as opposed to doing a Get) exactly for this reason.
session.Save(
    new Comment
    {
        Text = commentTextFromScreen,
        User = session.Load<User>(userID),
        Product = session.Load<Product>(productID)
    });
In the above example, you are telling NHibernate: I know these already exist in the database, so don't bother selecting them right now. NHibernate will return proxy objects for them and a select won't happen against the database as long as you don't attempt to access any properties on the objects.
For more info check out Ayende's blog post: The difference between Get, Load, and query by id.
