Caching Consistency vs. Static Object with nHibernate/ASP.NET - asp.net-mvc

I am a complete newbie to caching, NHibernate, and everything involving the two, so this question may be excessively stupid.
I have certain instances of objects that are used by multiple other objects in my system. So, for instance:
class Template {
// lots of data
}
class One {
IList<Template> Templates { get; set; }
}
class Two {
IList<Template> Templates { get; set; }
}
class Three {
IList<Template> Templates { get; set; }
}
Now, certain instances of Template are going to be used very, very frequently (think every 20 seconds or so), and they include a lot of things that need to be mathematically computed.
My question is basically which approach will yield the least stress on my database/server.
Am I best to just leave everything to second-level caching in NHibernate? Or am I wiser to retrieve the Template object and store it in a static variable when my ASP.NET application starts up, and refresh this variable if it changes?
I've looked at some of the other similar questions around SO but I am still very much in the dark. Most of the documentation on caching assumes a good deal of knowledge on the subject, so I'm having a difficult time discerning what the optimal process is.

Once every 20 seconds doesn't really sound very stressful. You need to weigh the need for updated data against the stress you can live with on your database.
The second-level cache won't necessarily help you in this case, since you use collections of objects. In order to know which objects it needs, NHibernate still has to query the database, and if it does that it might just as well fetch the data anyway (unless there is a lot of raw data in the entities).
You basically have three different options:
1st level cache
For each connection/session that you make, NHibernate will always cache the unique entities that it has fetched. Every time you try to get a single entity by its identifier (primary key), it will first check its first-level cache. This does not apply to collections of entities, though, unless you can force NHibernate to fetch only the identifiers for the collection and then get the entities one by one (usually very slow).
2nd level cache
This cache is available across connections/sessions, and NHibernate will try to fetch the data from the cache before it hits the database. The same rule applies as for the first-level cache: you can't get an entity's collections without querying the database unless they have already been loaded.
custom cache
You can always take care of caching yourself; however, that way you need to model your classes accordingly (store the Template objects themselves, and have the collections keep track of identifiers instead of Template objects). If you refactor like this, the first- and second-level caches would still be equally useful, though.
I will give you an example that shows you what I'm talking about:
If One contains templates with identifiers [1,2,3,4],
Two contains templates with identifiers [2,3], and
Three contains templates with identifiers [3,4,5]:
In order for NHibernate to know that One needs templates 1,2,3,4, it needs to query the database. Templates 1,2,3,4 will be cached individually here.
In order to actually know that Two needs entities 2 and 3, it still needs to query the database. It can't possibly know that 2 and 3 are also part of the collection in Two, so it won't fetch them from the cache; it will select the Template objects that belong to class Two, hence the full data. That is why caching won't help you here.
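If you instead go the custom-cache route described above (collections keep track of identifiers, and Template instances are cached per identifier), a minimal sketch could look roughly like this; the MemoryCache usage and the loader delegate are my assumptions, not part of the original design:
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

// Collections only hold identifiers; the Template instances live in the cache.
public class One {
    public IList<int> TemplateIds { get; set; } = new List<int>();
}

public class TemplateCache {
    private readonly MemoryCache _cache = MemoryCache.Default;
    private readonly Func<int, Template> _loadFromDb; // e.g. id => session.Get<Template>(id)

    public TemplateCache(Func<int, Template> loadFromDb) {
        _loadFromDb = loadFromDb;
    }

    public Template Get(int id) {
        var key = "template-" + id;
        var cached = (Template)_cache.Get(key);
        if (cached != null) return cached;

        var template = _loadFromDb(id); // database hit only on a cache miss
        _cache.Set(key, template, DateTimeOffset.Now.AddMinutes(10));
        return template;
    }

    public void Invalidate(int id) {
        _cache.Remove("template-" + id); // call after a Template is updated
    }
}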
I think you need to give more details on what kind of data it is that you will be processing, and how it will be stored and updated in order to get an answer that is useful.

Static variables would put the least stress on your server; however, that imposes some restrictions. Specifically, it would be much harder to scale (web garden/farm). If you don't need to scale, that's the option you're looking for.
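A minimal sketch of that static-variable option, assuming a lazily initialised static field plus an explicit refresh (the TemplateStore name and the LoadTemplates placeholder are made up for illustration):
using System;
using System.Collections.Generic;

public static class TemplateStore {
    // Loaded once on first access, e.g. during Application_Start.
    private static Lazy<IList<Template>> _templates = new Lazy<IList<Template>>(LoadTemplates);

    public static IList<Template> Templates {
        get { return _templates.Value; }
    }

    // Call this whenever a Template changes so the next access reloads from the database.
    public static void Refresh() {
        _templates = new Lazy<IList<Template>>(LoadTemplates);
    }

    private static IList<Template> LoadTemplates() {
        // Placeholder for your NHibernate query / repository call.
        return new List<Template>();
    }
}
As noted above, this only works cleanly on a single server; in a web garden/farm each process would hold its own copy.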

Related

Efficiently get state from Orleans Grain

I have a grain in Orleans for the players of a game. A player has several properties that I want to access directly in the client. Is it possible, is it efficient and does it make sense to have these as public properties on the grain? Or, should I have a GetAllState method that returns a DTO with the current value of these properties in the grain?
public interface IPlayerGrain : IGrainWithIntegerKey
{
// Individual public properties to access grain state?
string Name { get; }
int Score { get; }
int Health { get; }
// Or, get all the current grain state as DTO?
Task<PlayerState> GetAllState();
}
From my current understanding I think I will need to use GetAllState, as I think any communication into the grain needs to be via a method, and this may pass between silos. So you probably want to minimise the number of messages passed and wouldn't want to pass three messages to get Name, Score and Health. Or is message passing pretty cheap and not something I should worry about doing too much? In my example I've only included 3 properties, but in my real game there will be many more.
However, I don't really like the idea of having an anemic DTO model that is just a copy of the grain's internal properties.
So I was wondering if there was a better way, or a preferred pattern for this sort of thing in Orleans?
I think this depends a lot on the life cycle and access patterns of the properties. Do the properties tend to change independently or together? (At first glance, they seem to change independently and at quite different rates; I assume that Score and Health can change very frequently, but Name would almost never change.) And given that Name changes very infrequently, would it be a good fit for your access patterns to retrieve it every time you wanted an updated Score or Health value? Maybe Score and Health would be frequently accessed together, and Name along with other more static properties also belong together.
If you let these kinds of questions drive your API before you think about the cost of message passing, you will probably find some good sweet spots (probably not the whole state of the grain). You might also consider having a Player grain and a PlayerStats grain that have different life cycles and that correspond more closely to the change rate of various pieces of data.
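As a rough illustration of letting access patterns drive the interface (the GetName/GetStats split and the PlayerStats DTO are just one possible grouping, not an established Orleans pattern):
using System;
using System.Threading.Tasks;
using Orleans;

[Serializable]
public class PlayerStats {
    public int Score { get; set; }
    public int Health { get; set; }
}

public interface IPlayerGrain : IGrainWithIntegerKey {
    // Rarely changes; the client can fetch it once and cache it.
    Task<string> GetName();

    // Values that change frequently and are usually read together travel in one message.
    Task<PlayerStats> GetStats();
}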
Introduction of a complex return type to minimize roundtrips is a valid solution.
However, I wouldn't return the whole internal state, as I assume that not all the clients need all the data all the time. It may also be a sign that you have business logic implemented outside of the grains, and you should move it into the grain.
You might also consider exposing Health and Score, which are likely to change frequently, in a stream.

Is Core Data performance better with fewer attributes?

I have a core data entity called DiveSite that has a large number of attributes of which many are booleans that represent features or conditions affecting a dive site.
In fact, I have so many attributes that Xcode gives me a warning: "Misconfigured Entity - DiveSite has more than 100 properties; consider a more shallow entity hierarchy or denormalize properties".
Many of these properties could be grouped, reducing the overall number of attributes on the entity - I could possibly change groups of booleans into a series of integers and do a bitwise AND to check the factors I want (see the sketch below).
I also realise that I could make these groups into separate entities - some of which would have a 1-1 relationship and some a 1-many relationship
In terms of performance, would changing my DiveSite entity to have fewer attributes be a positive thing to do?
If so, would it likely be better performance-wise to have separate entities, or to have perhaps six attributes which I query using a predicate to filter on?
Thinking about it whilst phrasing this question, I realise that if I go the separate entities route I allow myself to add factors to some of the entities just by adding them as instances of the entity without changing my code.
I may have answered my own question as I write this, but I would appreciate the opinions of experienced Core Data and database users out there.
Cheers
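The bitmask grouping mentioned in the question could look roughly like the following; it is shown in C# only to match the rest of this page, but the same idea applies to a single integer attribute in Core Data queried with a bitwise AND in a predicate (the feature names are made up):
using System;

[Flags]
public enum SiteFeatures {
    None          = 0,
    ShoreEntry    = 1 << 0,
    BoatAccess    = 1 << 1,
    StrongCurrent = 1 << 2,
    NightDiving   = 1 << 3
    // ...one bit per boolean being replaced
}

class Demo {
    static void Main() {
        // Several booleans collapse into one integer-backed attribute.
        var features = SiteFeatures.ShoreEntry | SiteFeatures.NightDiving;

        // A bitwise AND answers "does this site have feature X?"
        bool hasNightDiving = (features & SiteFeatures.NightDiving) != SiteFeatures.None;
        Console.WriteLine(hasNightDiving); // True
    }
}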
Yes, it's advised to keep your entities small. When you have a list view, for example, you generally don't need all the information on the objects, but when you click one and go to the detail view, you would want to fetch more detailed information. Then you can fetch it from the other entities.
Of course, you should make relationships between these entities.
I can't say if it's a good practice to keep the entities "small" or not. But from my experience with Core Data, big entities aren't an issue.
By big, I mean an entity with 25 to 50 attributes, with, for example, a lot of long strings or binary data. The query time is, for entities of that size, more often than not greater than the load and instantiation time. Fetching 1000 full entities in one fetch is usually faster than fetching 1000 partial entities and then faulting in 100 missing attributes.
On a side note, I must add I very rarely used entities of that size in a shipping product. Large entities are almost always refactored in several smaller related entities.
Now, you say you've reached something like 100 attributes. Wow. I think I have never hit that mark in any of my projects - Core Data or any "classic" database. I would say the first issue here is readability & maintainability. I'm pretty sure you can refactor such a big entity into smaller ones, identify the core attributes that define the principal entity, find some shared values here and there, etc. That would certainly help.
Performance wise, as always, the answer lies in the profiler to accurately measure where the time is spent. Fetching too many happens, but fetching too little (aka loads of queries) happens more often in my experience.

Gave up DDD, but need some of its benefits

I'm giving up traditional DDD, which is often a massive timewaster, and forces me to do endless mapping: data layer <--> domain layer <--> presentation layer.
For even a small change I must change data models, domain models, presentation models / viewmodels, then the repositories, manager/service classes, and of course the AutoMapper maps, and then test the whole thing! Each call requires calling a layer which calls a layer which calls the underlying code. And I don't get anything in return other than "you might need it in the future". Meh.
My current approach is more pragmatic:
I don't worry about the difference between the "data layer" and "domain layer" any longer, as there's no point - the terms are interchangeable. I let EF do its thing, and add interfaces and repositories on top when needed.
I've merged my "data" and "domain" projects (into "core", boring name, I know), and I could almost swear that Visual Studio is actually running faster.
I allow EF entities to go up and down the stack, but, I still map them to presentation models / viewmodels as usual.
For simple operations I call repositories directly from controllers, for complex operations I use domain managers/services as usual; the repositories never expose IQueryable.
I define entities/POCOs as partial classes, so I can add domain behavior separately in corresponding partial classes.
The problem: I now use the entities all over the place, so client code can see their navigation properties. And the models are always materialized after they leave a repository, so those navigation properties are often null.
Possible solutions:
1. Live with it. It's ugly but preferable to the problems explained above.
2. For each entity, define an interface which hides the navigation properties; and make client code use the interfaces. But ironically, this means another layer (albeit thin and manageable).
3. What else?
I'm not used to this sort of fast-and-loose programming style, so maybe I'm missing some obvious tricks. Is there anything else I should take into account? I'm sure there are other problems I will encounter soon.
EDIT:
This question is not about DDD. And note that many struggle with a traditional DDD approach -- Seemann appears to arrive at the same conclusion, Rahien speaks about the "Useless Abstraction For The Sake Of Abstraction Anti Pattern", and Evans himself said DDD is only truly useful in 5% of cases. Also see this thread. Some of the comments/answers are predictably about how I'm doing DDD wrong, or how I can tweak my system to do it right. However, I'm not asking about DDD or bashing it for the cases where it is suitable; rather, I'd like to know what others are doing in line with the thinking I've described above. It's not as if DDD is a panacea for all design ills; every decade a new process comes out (RUP anyone? XP, Agile, Booch, blah...). DDD is just the shiniest new one, and the most well known and used. But pragmatism should come first, as I'm trying to build salable products that ship on time and are easy to maintain. The most useful programming axiom I've learned, by far, is YAGNI. What I want is to change my system to a sort of "DDD-lite", where I get its strong design/OOP/pattern philosophy, but without the fat.
A typical persistence approach with DDD is to map the domain model directly to corresponding tables. Technically, the mappings are still there (and are usually declared in code), but there is no explicit data model, as pointed out by lazyberezovsky.
The problem with navigation properties can be resolved in a few different ways, regardless of whether you are employing DDD or not. I dislike approach 1 because it makes it more difficult to reason about your code - you never know which properties will be set and which won't. Approach 2 is much better in theory, because it makes it very explicit what a given query requires, and making things explicit is a good practice in general. A similar, but simpler and less brittle, approach is to use read-models, which are just objects designed to fulfill the requirements of a given query or set of queries. Within the context of DDD, they allow you to decouple behavior-rich entities from queries, which are quite often at odds. Now proponents of DRY may scream heresy and come at you with torches and pitchforks, but in practice it is often much easier to maintain a read-model and an entity than to try to coerce entities to fulfill query requirements by way of interfaces or complex mapping strategies. Additionally, the responsibilities of a read-model and a behavior model are quite different, so DRY isn't applicable.
This is not to say that DDD is applicable in your scenario. It is often a wise decision to avoid full-fledged DDD, especially in scenarios that are mostly CRUD. You are correct to be cautious; this is a good example of KISS and YAGNI. DDD reaps benefits when your domain consists of complex behavior, not just data. At any rate, the read-model pattern still applies.
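A minimal read-model sketch along those lines (the Customer/CustomerListItem names and the projection are illustrative, not taken from the original post):
using System.Collections.Generic;
using System.Linq;

// Behaviour-rich entity used for commands (domain methods, invariants and navigation properties omitted here).
public class Customer {
    public int Id { get; set; }
    public string Name { get; set; }
    public string City { get; set; }
}

// Read-model: a flat object shaped for one query/view, with no behaviour and no navigation properties.
public class CustomerListItem {
    public int Id { get; set; }
    public string Name { get; set; }
    public string City { get; set; }
}

public static class CustomerListQuery {
    // Populated by a straight projection; the entity graph is never materialized for this screen.
    public static List<CustomerListItem> Run(IQueryable<Customer> customers) {
        return customers
            .Select(c => new CustomerListItem { Id = c.Id, Name = c.Name, City = c.City })
            .ToList();
    }
}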
UPDATE
For implementations that don't employ a read-model, take a look at Fetching Strategy Design, where the notion of a fetching strategy allows the specification of exactly what is needed from the database, which mitigates issues with navigational properties. The material referenced in the linked post is also of interest. Overall, this attempts to avoid the layer of indirection present in other approaches. However, in my opinion, using the proposed fetching strategy is more complex than using a read-model, while the net result is the same.
Some thoughts about this point:
... the repositories never expose IQueryable ... the models are always
materialized after they leave a repository ...
Your question is tagged with "asp.net-mvc", so you have a web application in mind. 90% or more of all requests will be GET requests that are supposed to fetch some data from the database and show it in a web view. How often is that data really entities, rather than just bags of properties (a selection of properties of an entity type, or perhaps composed of properties from multiple entities)?
Say, your application has 100 views. Only a minority of these will show complete entities:
50 of them are list views that show selected data (a customer with ID and address, but without the customer's contact person, phone number and sales volume)
20 of them contain autocomplete text boxes to select a reference (the customer for an order, but only the customer's name and city is shown in the autocomplete list, not the rest of the address nor contact person, phone number and sales volume and only the first 5 hits are displayed)
1 is an edit view for a customer that shows everything, but not the sales volume
1 is a details view for a customer with his last five orders
1 is a details view for an order including order items including product of each item but without the product's supplier name
1 is the same view but specialized for the purchasing department that wants to see the supplier for each item and item's product with average supplier's lead time for the last three months.
1 is a view for the service department that shows the order with only the order items of product category "repair service"
1 view for the Human Resources department shows employees including a photo stored as a big blob
1 view for personnel planning department shows a short version of the employee without photo
etc., etc.
As a UI programmer I would have all kinds of data requirements to render a view with the examples above:
I need only a selection of properties
I need even different selections of the same entity's properties for different views
I need an order including all items but without a reference to a product
I need an order including all items (but not all properties of the items) and including a reference to a product and to a supplier (but not all supplier's properties)
I need an order including only a filtered list of order items
I need a customer including the last five orders, not all 3000 orders he ever had
I need an employee but please without the big blob image
etc., etc.
How to fulfill these requirements as a data access/repository/service developer?
I only provide a handful of methods and materialize entities: load order header, load order header with items, load order header with items and product, load order header with items and product and supplier, load customer header (throw 15 of the 20 properties away, dear UI developer, if you only need five properties), load customer header with all 3000 orders (throw 2995 away, dear UI developer, if you only need five), etc., etc. I return interfaces from the repositories that hide the navigation properties that weren't loaded.
I care about every detail that the UI needs: I create repository/service methods like GetFiveCustomerPropertiesForAutoComplete, GetCustomerWithLastFiveOrders, etc. etc. I return interfaces from the repositories that hide the properties (also scalar) I haven't loaded. Or I return "DTOs" that contain the requested properties. I change the repository/services and create new DTOs every day when a UI developer calls with a data requirement for the next view.
I return IQueryable<TEntity> from the repositories and tell the UI developer "create the LINQ query yourself to fetch the data you need for your views". (Next morning the DBA is complaining about hundreds of terribly performing database queries.)
I return "prepared" IQueryable<TEntity>s from the repositories/services that cover - for example - security concerns like applying Where clauses for the user's access rights, or append a Where clause for a search term, or apply a NoTracking option to the query. I tell the UI developer: "You are allowed to extend the query with a) projections (Select), b) paging (Take and Skip) and perhaps c) sorting (OrderBy), because I consider those three query parts to be UI concerns. All other query requirements (filtering, joining, grouping, etc.) have to be implemented in the repository/service layer and are forbidden in the UI layer." The most important piece here is projections that materialize ViewModels directly through the LINQ/SQL query, without an intermediate mapping layer and without the overhead of loading more than the needed columns/properties.
These are only some thoughts. Every approach has its benefits and downsides. Working in small teams where at least one or a few developers have an overview of what is happening in both the repository/service and the UI/"projection" layer, the last option works fine for me in my experience, although it doesn't always work with the strict rules described (for example, the filter by product category for the included order items of an order requires applying a Where clause inside the projection, i.e. in the UI layer). For POST requests and data modifications I would use DTOs that send the data collected from a view back to a service to be processed there.
For stricter separation of "query layer" and UI layer I would probably prefer something close to the second option, maybe not with an interface/DTO for every UI requirement, but somehow reduced to a set of DTOs for the most common requirements (with the price of a little overhead of sometimes unnecessarily loaded properties). However, I expect that to be more work than the last option due to the larger amount of necessary repository/service methods, the additional maintenance of (perhaps many) DTOs and the intermediate mapping between DTOs and ViewModels.
Personally I am concerned about materializing full entities, especially complex object graphs, when I don't need them 90% of the time. But my concern is not verified by extensive performance measurements proving that this approach is really a problem for a "normal" application that doesn't have special high performance needs.
How can anyone give you sound advice when we have no clue as to what it is you are building? In the grand scheme of things, you might be building the wrong solution (not saying you are). So do realize all we can relate to is technical design issues and similar past experiences.
Many people face your problem, indeed. The mapping is the loose-coupling tax in the land of static typing. Maybe a more dynamic language could solve some of your pain. Or maybe you might find virtue in automating more (DSL, MDA). You could also switch to client-server instead.
Interfaces are not layers, rather abstractions. Use them wisely.
Personally, I'd never take these shortcuts. I've been bitten too many times trying to skip steps; logic starts popping up in odd places. If I have a data-driven app to develop, simple datasets come to mind, EF as well. But I don't call the objects aggregate or entity in the DDD sense, just entity in the ERD sense. Transaction Script might be a better fit than doing the partial-class sprinkling. As for read-model objects, these are not layers of indirection.
Overall, I get the feeling (and it is just that) that you're making a mess of things because you are fighting the mapping friction by taking a dependency on objects that don't reveal the required shape (navigation properties that are null), thereby causing problems in a different area.
I'll just try to be short - we went for method 2, i.e. adding a layer of interfaces that you use on the client. You can have EF generate them for you; it's just a little tweak of the .tt templates.
Yes, it creates (yet) another layer, but it's logic-free and adds no complexity. Of course, if your client needs to deserialize entities, you have to add (yet) another layer that handles deserialization and references both the entity definitions and the interfaces it returns to the client. But it's also thin, so we learned to live with it, because it turned out to work just fine, and the client really stays clean...
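For illustration, the generated interfaces might look roughly like this (hand-written here and with invented names; the actual .tt tweak is not shown):
using System.Collections.Generic;

// Interface handed to client code: scalar properties only, no navigation properties.
public interface ICustomer {
    int Id { get; }
    string Name { get; }
}

// The EF entity implements it; the Orders navigation property is simply not part of the contract.
public partial class Customer : ICustomer {
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Order> Orders { get; set; }
}

public class Order { public int Id { get; set; } } // minimal stand-in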
The problem: I now use the entities all over the place, so client code
can see their navigation properties.
I don't quite get why this is a problem and how it's related to EF entities in particular. By client code do you mean presentation-layer code, or any code consuming your entities?
For UI code a simple solution is to define ViewModels that just don't expose these navigation properties (or only expose a few of them depending on the object graph depth your GUIs need).
For other code it's only normal to be able to see the navigation properties of entities. They are public for a reason. You can end up breaking the Law of Demeter if you abuse them, but it's a matter of developer discipline not to fall into that trap.
An entity contains its own contract - all code that has access to the entity is supposed to be able to use any part of this contract. If you feel like your entities are exposing too much and that you need to put interfaces on top of them to restrain access to certain parts, maybe it's just a different entity.
I don't worry about the difference between the "data layer" and "domain layer" any longer, as there's no point - the terms are
interchangeable. I let EF do it's thing, and add interfaces and
repositories on top when needed.
I've merged my "data" and "domain" projects (into "core", boring name, I know), and I could almost swear that Visual Studio is
actually running faster.
I allow EF entities to go up and down the stack, but, I still map them to presentation models / viewmodels as usual.
For simple operations I call repositories directly from controllers, for complex operations I use domain managers/services as
usual; the repositories never expose IQueryable.
I define entities/POCOs as partial classes, so I can add domain behavior separately in corresponding partial classes.
None of these things seems to be fundamentally anti-DDD to me, except data/domain separation.
Especially if you do database-first EF: DDD is clearly a domain-centric approach, and you shouldn't define your tables before defining your entities. It's also not clear whether some of your domain entities talk to the database or EF directly (not DDD-compliant - and more generally, not layered-architecture-compliant) or whether you systematically have data access objects in between (DDD-compliant).

DDD Repository EF Performance

I was wondering how people who follow DDD get around potential performance issues when using EF and the repository pattern and returning an aggregate root with its children.
e.g. Parent
----- Child A
Or even e.g. Parent
----- Child A
------- Child A2
If I bring back the aggregate root's data from the repository and use a navigation property, EF then fires off another query because it is utilising lazy loading. This is a problem because we are experiencing 100+ queries when we are in a loop.
If I bring back the aggregate root's data from the repository along with the children's data by using 'Include' statements, the children's data comes back from the repository with its parent. Then, when I use the navigation properties, no queries fire off because that data is already in memory.
The problem with the second approach is that some of the data for the child object can be quite big, e.g. 100,000+ records.
Obviously I don't want to store 100,000+ records in memory for the child. We decided to use paging to select 10 at a time to get around this, but another issue arises when we try to run calculations on the children, like sum, total count, etc.; we can only do that in memory on the 10 records we have pulled back.
I know the DDD way is to pull back the object graph with all of its data in memory and then you traverse through the objects for the data you need to display.
There is a split in our team, with some believing we should pull back the aggregate root and its children together, and some feeling we should have a method on the aggregate root's repository that queries the children's data directly and pulls back the child objects.
I just wondered how other people have solved the performance issues with large amounts of data being stored in memory for the parent/child.
If you have to deal with performance you must use the second approach, with a special method exposed on the repository - that is the point of the repository, to provide you with such methods; otherwise you could use the EF context/set directly.
Theory is nice if you work with theoretical data - once you have real data you must tweak the theory to work in real-world scenarios.
You can also check this article (there are three follow-up articles on the blog). It takes the second approach but makes it look like the first. It works for Count, but maybe you can use the idea for some other scenarios as well.
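A sketch of what such special repository methods might look like (the names and the EF usage are my assumptions): the children are paged, and the aggregate calculations are translated to SQL instead of running over the ten in-memory records.
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class Child {
    public int Id { get; set; }
    public int ParentId { get; set; }
    public decimal Amount { get; set; }
}

public class ParentRepository {
    private readonly DbContext _db;
    public ParentRepository(DbContext db) { _db = db; }

    // A page of children for one aggregate root, fetched directly rather than via lazy loading.
    public List<Child> GetChildrenPage(int parentId, int pageIndex, int pageSize) {
        return _db.Set<Child>()
            .Where(c => c.ParentId == parentId)
            .OrderBy(c => c.Id)
            .Skip(pageIndex * pageSize)
            .Take(pageSize)
            .ToList();
    }

    // Aggregates run as COUNT/SUM in the database, not in memory.
    public int GetChildCount(int parentId) {
        return _db.Set<Child>().Count(c => c.ParentId == parentId);
    }

    public decimal GetChildTotal(int parentId) {
        return _db.Set<Child>()
            .Where(c => c.ParentId == parentId)
            .Sum(c => (decimal?)c.Amount) ?? 0m;
    }
}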
The DDD way isn't always to pull back all the data that is required. One technique we use is a pattern called double dispatch. This is where you call your aggregate root's method (or domain service) with all the parameters it requires, but you also pass in a 'query-only' repository-style interface as a parameter. This enables the root or its children to decide what extra data is required, and when, simply by calling methods on this injected interface.
This approach adheres to the DDD principle that aggregate roots should not be aware of the repository implementation, whilst providing testable and highly performant domain code.
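A rough sketch of that double-dispatch idea (interface and method names invented): the aggregate root is handed a query-only interface and asks it for exactly the figures it needs, so the bulky child rows never have to be loaded.
// Query-only abstraction; the root depends on this, not on EF or a concrete repository.
public interface IOrderQueries {
    int CountItems(int orderId);
    decimal TotalValue(int orderId);
}

public class Order {
    public int Id { get; set; }
    public decimal DiscountThreshold { get; set; }

    // Double dispatch: the caller passes the query interface in, and the root decides
    // what extra data it needs and when to ask for it.
    public bool QualifiesForDiscount(IOrderQueries queries) {
        // Only the aggregated value crosses the boundary, not 100,000+ child rows.
        return queries.TotalValue(Id) >= DiscountThreshold;
    }
}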

Can someone help me understand why an auto-identity (int) is bad when using NHibernate?

I've been seeing a lot of commentary (from an NHibernate perspective) about using Guid as opposed to an int (and presumably auto-id in the database), with the conclusion that using auto-identity breaks the UoW pattern.
This post has a short description of the issue, but it doesn't really tell me "why" it breaks the pattern (unless I'm misunderstanding, which is likely the case).
Can someone enlighten me?
There are a few major reasons.
Using a Guid gives you the ability to identify a single entity across many databases, including six relational databases with the same schema but different data, a document database, etc. This becomes important any time you have more than one single place where data goes - and that means your case too: you have a dev database and a prod database, right?
Using a Guid gives NHibernate the ability to batch more statements together, perform more database work at the very end of the unit of work / transaction, and reduce the total number of roundtrips to the database, increasing performance as well as conferring other benefits.
Comment:
Random Guids do not create poor indexes - natively, they create poor clustered indexes. There are two solutions.
Use a partially sequential Guid. With NHibernate, this means using the guid.comb id generator rather than the guid id generator. guid.comb is partially sequential for good performance, but retains a very high degree of randomness.
Have your Guid primary key be a nonclustered index, and put a clustered index on another auto-incrementing column. You may decide to map this column, in which case you lose the benefit of better batching and fewer roundtrips, but you regain all the benefits of short numbers that fit easily in a URL. Or you may decide not to map this column and have it remain completely within the database, in which case you gain better performance for Guids as primary keys as well as better performance for NHibernate doing fewer roundtrips.
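For the first option, a guid.comb mapping might look roughly like this with NHibernate's mapping-by-code (the Customer entity is a made-up example):
using System;
using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

public class Customer {
    public virtual Guid Id { get; protected set; }
    public virtual string Name { get; set; }
}

public class CustomerMap : ClassMapping<Customer> {
    public CustomerMap() {
        // guid.comb ids are generated client-side and are partially sequential,
        // so inserts batch well and don't need a round trip to learn the identifier.
        Id(x => x.Id, m => m.Generator(Generators.GuidComb));
        Property(x => x.Name);
    }
}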
My take would be that the key breaking factor is that getting the auto-incremented value requires an actual write to the database, which NHibernate would have deferred or possibly never performed.
When using identity in a parent-child scenario, a round trip to the database is needed to get the ID of the parent so that the child can be associated correctly. This means that the parent has to be committed at this time. Should there be a problem with the child, you would then need to delete the parent in order to exit the UoW correctly.
