I'm refactoring a web app to ensure that my entities are always initialized in a valid state. This means that I'm using DTOs for user input and mapping those DTOs to my entities after validation.
However, some of the properties of the DTOs are not directly mappable to the properties of the entities. If a DTO contains a base64-encoded image and the entity requires a URL to the image file, I need to save the base64 data to a file in the mapper in order to assign the URL of that file to the entity.
It could just be me, but it feels like this kind of stuff doesn't belong inside a DTO to entity mapper. Are there reasons why this might be a bad idea? What strategies are commonly used for this kind of mapping?
It seems to me that in your case you don't have a simple mapping process from DTOs to entities, because you have application logic in the process. Storing images somewhere and getting a URL/path for each image is application-specific logic, so you probably need a Service for it.
Applications usually have tasks or operations that they need to perform, and these define the flow of the application. One way to define this flow is by using Commands and attaching the DTOs to those Commands.
For example, let's say you have a registration process: the user must enter some data, and you need to create an Account entity.
In the case of a web app, the frontend collects the user information, then creates and sends a command to the backend; here that would be a RegisterUserCommand. This command can contain a UserInfo DTO property, or have individual properties for the user info. For example:
RegisterUserCommand {
    string UserName;
    string FirstName;
    string LastName;
    Image Avatar;
}
The next thing you need is a RegisterUserCommandService (or RegisterUserCommandHandler, depending on your taste and the terminology used) that will process/handle the command. You also need a StorageProvider that provides operations for storing and retrieving images (on the file system, Amazon S3, Dropbox, etc.) and gives you back a link. Here is some sample pseudo-code:
RegisterUserCommandService {
    Process(RegisterUserCommand cmd) {
        avatarLink = storageProvider.Store(cmd.Avatar);
        account = new Account(cmd.UserName, ...., avatarLink);
        accountRepository.Save(account);
    }
}
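A storage provider like that can sit behind a small interface. Here is a minimal C# sketch of the idea (the interface shape and the file-system implementation are assumptions, not part of any particular library):

using System;
using System.Drawing;
using System.IO;

// Hypothetical storage abstraction; implementations could target the
// file system, Amazon S3, Dropbox, etc.
public interface IStorageProvider
{
    // Stores the image and returns a link (URL or path) to the stored file.
    string Store(Image image);
}

public class FileSystemStorageProvider : IStorageProvider
{
    private readonly string rootPath;

    public FileSystemStorageProvider(string rootPath)
    {
        this.rootPath = rootPath;
    }

    public string Store(Image image)
    {
        // Generate a unique file name so uploads never collide.
        string fileName = Guid.NewGuid() + ".png";
        string fullPath = Path.Combine(rootPath, fileName);
        image.Save(fullPath);
        return fullPath; // a real implementation would map this to a public URL
    }
}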
If you tell me more about your application, I can provide an example for your specific case.
Here are some resources you can check:
https://cqrs.wordpress.com/documents/task-based-ui/
https://martinfowler.com/bliki/CommandOrientedInterface.html
https://weblogs.asp.net/shijuvarghese/cqrs-commands-command-handlers-and-command-dispatcher
I'm using (and loving) Siesta to communicate with a REST web service in my Swift App. I have implemented a series of ResponseTransformers to map the API call responses to model classes so that the Siesta Resources are automatically parsed into object instances. This all works great.
I now want to implement a Siesta PersistentCache object to support an offline mode by having Siesta cache these objects to disk (rather than in memory) by storing them in Realm. I am not sure how to do this because the documentation says (about the EntityCache.writeEntity function):
This method can — and should — examine the entity’s content and/or headers and ignore it if it is not encodable. While they can apply type-based rules, however, cache implementations should not apply resource-based or url-based rules; use Resource.configure(...) to select which resources are cached and by whom.
In an attempt to conform to this guideline, I have created a specific PersistentCache object for each Resource type based on URL Pattern matching during Service Configuration:
class _GFSFAPI: Service {
    private init() {
        configure("/Challenge/*") { $0.config.persistentCache = SiestaRealmChallengeCache() }
    }
}
However, since the EntityCache protocol methods only include a reference to the Entity (which exposes raw content but not the typed objects), I don't see how I can call the realm write methods during the call to EntityCache.writeEntity or how to pull the objects out of Realm during EntityCache.readEntity.
Any suggestions about how to approach this would be greatly appreciated.
Excellent question. Having separate EntityCache implementations for each model could certainly work, though it seems like it might be burdensome to create all those little glue classes.
Models in the Cache
Your writeEntity() is called with whatever comes out at the end of all your response transformers. If your transformers are configured to spit out model classes, then writeEntity() sees models. If those models are Realm-friendly models, well, I don’t see any reason why you shouldn’t be able to just call realm.add(entity.content). (If that’s giving you problems, let me know with an update to the question.)
Conversely, when reading from the cache, what readEntity() returns does not go through the transformer pipeline again, so it should return exactly the same thing your transformers would have produced, i.e. models.
Cache Lookup Keys
The particular paragraph you quote from the docs is ill-written and perhaps a bit misleading. When it says you “should not apply resource-based or url-based rules,” it’s really just trying to dissuade you from parsing the forKey: parameter — which is secretly just a URL, but should remain opaque to cache implementations. However, any information you can gather from the given entity is fair game, including the type of entity.content.
The one wrinkle under the current API — and it is a serious wrinkle — is that you need to keep a mapping from Siesta’s key (which you should treat as opaque) to Realm objects of different types. You might do this by:
keeping a Realm model dedicated to a polymorphic mapping from Siesta cache keys to Realm objects of various types,
adding a siestaKey attribute and doing some kind of union query across models, or
keeping a (cache key) → (model type, model ID) mapping outside of Realm.
I’d probably pursue the options in that order, but I believe you are in relatively unexplored (though perfectly reasonable) territory here using Realm as the backing for EntityCache. Once you’ve sussed out the options, I’d encourage you to file a GitHub issue for any suggested API improvements.
I am developing a web-based application using ASP.NET MVC. I am trying to have rich domain models rather than thin/anemic models.
I have modelled my solution along the lines of the Onion architecture. The different projects are as below:
{}.Domain.Core - contains the domain objects and interfaces like IDbContext, which is implemented in the Infrastructure layer
{}.Database - is the database project
{}.Infrastructure - contains implementations for logging, data access, etc.
{}.Web - Views and Controllers
The data access is done using Dapper, and IDbContext is a wrapper around two simple command/query interfaces. I have isolated each of the queries as a separate class.
For the sake of discussion, I am taking a small part of the application.
I have a versioned document library which contains documents along with other metadata like tags, permissions, etc.
A simplified model of my document object is as shown below
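Something along these lines (simplified, with placeholder property names):

using System.Collections.Generic;

// Rough sketch of the Document entity; names here are just placeholders.
public class Document
{
    public int Id { get; private set; }
    public string Title { get; set; }
    public int Version { get; set; }
    public List<string> Tags { get; set; }
    public List<string> Permissions { get; set; }
}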
I would want the operations to be defined within the domain object, since there is business logic involved in each of these operations.
Let me take "Delete" as an operation. The operation needs to perform the following steps:
Validate that the user has permission to delete
Check that no associations will be impacted by this delete
Check that no workflow is in progress
Delete the actual item from the database in a transaction
As shown in the above example, I would need the database context to complete this operation.
The way I am currently thinking of modeling this is to have the domain object hold an IDbContext, which can execute the queries it exposes.
In my controller class I call the domain objects and perform the operations.
I am not sure if passing the IDbContext into the domain object is OK. If not, what are better ways to model this?
I am not convinced about having a separate service layer because:
1) Controllers act as the first layer of the service layer in most cases
2) A service layer would just duplicate the same methods from the domain in another class
Let me know how I can improve this design.
Injecting the IDbContext like that breaks the main principle of the domain model: it should be responsible for business logic ONLY, while retrieving and storing your domain entities is the responsibility of the infrastructure layer. Yes, you inject it by interface, hiding the actual implementation, but it still makes your domain model aware of some storage.
Also, the steps above required to delete a Document don't entirely belong to the Document object. Let's consider the first step, the user permission check, with the following cases:
Users with Admin role should be allowed to delete any document
Document owner should be allowed to delete the document
For the first case there might not be any connection between a user and the document to remove; admin users are just allowed to do anything. It's like the classic example of two bank accounts and an operation to transfer money, which involves both accounts but is the responsibility of neither. This is where domain services come into play. Please don't confuse them with Service layer services. Domain services are part of the domain model and are responsible for business logic only.
So if I were you, I would create a new domain service with a DeleteDocument method. This should perform the first three steps above, accepting a User and a Document as parameters. The fourth step should be done by your repository. I am not sure what you mean by saying
I didn’t see too much value in adding repositories
but from the domain model's perspective you already have one: it's the IDbContext. I assume you meant some pattern for implementing a repository, or a separate repository for each entity. In the end, the pseudo-code in your controller should look like the following:
var user = dbContext<User>.SelectById(userId);
var document = dbContext<Document>.SelectById(docId);
var docService = new DocumentService();
docService.DeleteDocument(document, user); // Throws an exception here if deletion is not allowed
dbContext<Document>.Delete(document);
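A minimal sketch of what that DocumentService could look like (the property names IsAdmin, OwnerId, HasAssociations, and HasActiveWorkflow are assumptions for the sketch, as are the stub classes):

using System;

// Assumed minimal shapes, just for this sketch:
public class User { public int Id { get; set; } public bool IsAdmin { get; set; } }
public class Document { public int OwnerId { get; set; } public bool HasAssociations { get; set; } public bool HasActiveWorkflow { get; set; } }

// Domain service: business rules only, no reference to IDbContext.
public class DocumentService
{
    // Performs the first three steps; the caller does the actual delete.
    public void DeleteDocument(Document document, User user)
    {
        // 1. Validate that the user has permission to delete.
        if (!user.IsAdmin && document.OwnerId != user.Id)
            throw new InvalidOperationException("User may not delete this document.");

        // 2. Check that no associations will be impacted by this delete.
        if (document.HasAssociations)
            throw new InvalidOperationException("Other items depend on this document.");

        // 3. Check that no workflow is in progress.
        if (document.HasActiveWorkflow)
            throw new InvalidOperationException("A workflow is still in progress.");
    }
}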
If you expect to need this logic in many places in your application, you can wrap it up in a Service layer service.
I suggest reading Eric Evans' book on DDD if you want to learn more about domain modeling. It discusses the meaning of entities, value objects, domain services, etc. in detail.
ANSWER TO THE COMMENT:
Not really, the domain services are part of the domain, so both the implementation and the interface are part of the domain as well. The fact that two or more objects have to interact with each other is not enough to justify creating a domain service. Let's consider a flight booking system as an example. You have a Flight entity with properties such as DepartureCity and ArrivalCity. The Flight entity should also have a reference to a list of seats. A Seat could be a separate entity as well, with properties such as Class (business, economy, etc.), Type (aisle, middle, window), etc. So booking a seat requires interacting with different entities, such as Flight and Seat, but we don't need a domain service here: a Seat by its nature makes no sense unless considered as a child object of a Flight. It's very unlikely you would ever need to query a Seat entity outside of the context of a Flight. So reserving a Seat is the responsibility of the Flight entity here, and it's fine to place the reserving logic in the Flight class. Please note this is just an example to try to explain when we need to create domain services; a real system could be modeled in a completely different way. So just follow these three basic criteria to decide whether or not you need a domain service:
The operation performed by the Service refers to a domain concept which does not naturally belong to an Entity or Value Object.
The operation performed refers to other objects in the domain.
The operation is stateless.
I'm accessing the dbContext from the controller, which is the application/service layer, not the domain/business layer. The domain model deals with business logic only; it should not be aware of any persistence logic, and from the example above you can see that DocumentService has no reference to the dbContext.
A week back, I had an ASP.NET MVC application that called on a logical POCO service layer to perform business logic against entities. One approach I commonly used was to use AutoMapper to map a populated viewmodel to an entity and call update on the entity (pseudo code below).
MyEntity myEntity = myService.GetEntity(param);
Mapper.CreateMap<MyEntityVM, MyEntity>();
Mapper.Map(myEntityVM, myEntity);
this.myService.UpdateEntity(myEntity);
The update call would take an instance of the entity and, through a repository, call NHibernate's Update method on the entity.
Well, I recently changed my logical service layer into WCF Web Services. I've noticed that the link NHibernate makes with an entity is now lost when the entity is sent from the service layer to my application. When I try to operate against the entity in the update method, things are in NHibernate's session that shouldn't be and vice-versa - it fails complaining about nulls on child identifiers and such.
So my question...
What can I do to efficiently take input from my populated viewmodel and ultimately end up modifying the object through NHibernate?
Is there a quick fix that I can apply with NHibernate?
Should I take a different approach in conveying the changes from the application to the service layer?
EDIT:
The best approach I can think of right now is to create a new entity and map from the view model to the new entity (including the identifier). I would pass that to the service layer, where it would retrieve the entity using the repository, map the changes using AutoMapper, and call the repository's update method. I will be mapping twice, but it might work (although I'll have to exclude a bunch of properties/children in the second mapping).
No quick fix. You've run into the change tracking over the wire issue. AFAIK NHibernate has no native way to handle this.
These may help:
https://forum.hibernate.org/viewtopic.php?f=25&t=989106
http://lunaverse.wordpress.com/2007/05/09/remoting-using-wcf-and-nhibernate/
In a nutshell, your two options are to adjust your service to send state-change information over the wire in a form NHibernate can read, or to load the objects, apply the changes, and then save in your service layer.
Don't be afraid of doing a select before an update inside your service. This is good practice anyway to prevent concurrency issues.
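A minimal sketch of that select-before-update flow in the service layer (the service class and the Id property on the view model are placeholders; AutoMapper's static API is used to match the code above):

using AutoMapper;
using NHibernate;

public class MyService
{
    private readonly ISessionFactory sessionFactory;

    public MyService(ISessionFactory sessionFactory)
    {
        this.sessionFactory = sessionFactory;
    }

    public void UpdateEntity(MyEntityVM myEntityVM)
    {
        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction transaction = session.BeginTransaction())
        {
            // Select before update: load the current persistent instance.
            MyEntity entity = session.Get<MyEntity>(myEntityVM.Id);

            // Apply the incoming changes onto the attached instance.
            Mapper.Map(myEntityVM, entity);

            // NHibernate flushes the tracked changes on commit.
            transaction.Commit();
        }
    }
}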
I don't know if this is the best approach, but I wanted to pass along information on a quick fix with NHibernate.
From NHibernate.xml...
<member name="M:NHibernate.ISession.SaveOrUpdateCopy(System.Object)">
<summary>
Copy the state of the given object onto the persistent object with the same
identifier. If there is no persistent instance currently associated with
the session, it will be loaded. Return the persistent instance. If the
given instance is unsaved or does not exist in the database, save it and
return it as a newly persistent instance. Otherwise, the given instance
does not become associated with the session.
</summary>
<param name="obj">a transient instance with state to be copied</param>
<returns>an updated persistent instance</returns>
</member>
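In code, that amounts to something like this (session handling elided; myEntity is the detached instance coming back from the client):

// Copies the detached entity's state onto the persistent instance with the
// same identifier (loading it if needed) and returns the persistent instance.
MyEntity persistent = (MyEntity)session.SaveOrUpdateCopy(myEntity);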
It's working although I haven't had time to examine the database calls to see if it's doing exactly what I expect it to do.
I've been learning the ASP.NET MVC framework using the Apress book "Pro ASP.NET MVC Framework" by Steven Sanderson. To that end I have been trying out a few things on a project that I am not that familiar with, but which are things that I think I should be doing, namely:
Use the repository pattern to access my database and populate my domain/business objects.
Use an interface for the repository so it can be mocked in a test project.
Use inversion of control to create my controllers.
I have an MVC web app, a domain library, and a test library.
In my database my domain items have an Id represented as an int identity column. In my domain classes the setter is internal so only the repository can set it.
So my quandaries/problems are:
Effectively, all classes in the domain library can set the Id property; that's not good OOP, as the IDs should be read-only.
In my test library I create a fake repository. However, since it's in a different assembly, I can't set the Id properties on the classes.
What do others do when using a database data store? I imagine that many use an integer Id as the unique identifier in the database and would then need to set it on the object, while not allowing anything else to set it.
Can't you set your objects' IDs during construction and make them read-only, rather than setting IDs through a setter method?
Or do you need to set the ID at other times? If that's the case, could you explain why?
EDIT:
Would it be possible to divorce the ID and the domain object? Does anything other than the repository need to know the ID?
Remove the ID field from your domain object, and have your repository implementations track object IDs using a private Dictionary. That way anyone can create instances of your domain objects, but they can't do silly things with the IDs.
That way, the IDs of the domain objects are whatever the repository implementation decides they are - they could be ints from a database, urls, or file names.
If someone creates a new domain object outside of the repository and, say, tries to save it to your repository, you can look up the ID of the object and save it as appropriate. If the ID isn't there, you can either throw an exception to say you need to create the object using a repository method, or create a new ID for it.
Is there anything that would stop you from using this pattern?
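A minimal sketch of the pattern (all names are hypothetical; Product stands in for any domain object):

using System.Collections.Generic;

public class Product { /* domain members elided */ }

// The repository owns the database identities; domain objects never see them.
public class ProductRepository
{
    // Reference-equality mapping from domain object to its database ID.
    private readonly Dictionary<Product, int> ids = new Dictionary<Product, int>();

    public void Save(Product product)
    {
        int id;
        if (ids.TryGetValue(product, out id))
        {
            UpdateRow(id, product);             // known object: update its row
        }
        else
        {
            ids[product] = InsertRow(product);  // new object: insert and remember the identity
        }
    }

    private int InsertRow(Product product)
    {
        // INSERT ...; return the generated identity value.
        return 0; // placeholder
    }

    private void UpdateRow(int id, Product product)
    {
        // UPDATE ... WHERE Id = @id
    }
}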
You can use the InternalsVisibleTo attribute. It allows an assembly's internal types and members to be visible to the tests (provided they are in different assemblies).
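For example, in the domain assembly's AssemblyInfo.cs (the test assembly name is a placeholder):

using System.Runtime.CompilerServices;

// Lets the test assembly see internal members, such as the internal Id setter.
[assembly: InternalsVisibleTo("MyDomain.Tests")]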
Otherwise, you can leave the property read-only for external code, but at the same time have a constructor which takes an ID parameter and sets the ID property. Then you can call that constructor, as sketched below.
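Something along these lines (Product is a placeholder name):

public class Product
{
    public int Id { get; private set; }

    // The ID is supplied once, at construction time, and is read-only afterwards.
    public Product(int id)
    {
        Id = id;
    }
}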
Hope this helps.
Let's start with this basic scenario:
I have a bunch of tables that are essentially rarely-changed enums (e.g. GeoLocations, Category, etc.). I want to load these into my EF ObjectContext so that I can assign them to entities that reference them as FKs. These objects are also used to populate all sorts of dropdown controls. Pretty standard scenarios so far.
Since a new controller is created for each page request in MVC, a new entity context is created and these "enum" objects are loaded repeatedly. I thought about using a static context object across all instances of controllers (or repository object).
But will this require too much locking and therefore actually worsen perf?
Alternatively, I'm thinking of using a static context only for read-only tables. But since entities that reference them must be in the same context anyway, this isn't any different from the above.
I also don't want to get into the business of attaching/detaching these enum objects, since I believe that once I attach a static enum object to one entity, I can't attach it to another entity.
Please help; I'm quite new to EF + MVC, so I am wondering what the best approach is.
Personally, I never have any static Context stuff, etc. For me, when I call the database (CRUD) I use that context for that single transaction/unit of work.
So in this case, what you're suggesting is that you wish to retrieve some data from the database .. and this data is .. more or less .. read-only and static.
Lookup data is a great example of this.
So your Categories never change. Your GeoLocations never change, also.
I would not worry about this concept at the database/persistence level, but at the application level. So, just forget that this data is static/read-only, etc., and just get it. Then, when you're in your application (i.e. an ASP.NET MVC controller method or the global.asax code), THEN you should cache this ... on the UI layer.
If you're doing a nice n-tiered MVC app, which contains
UI layer
Services / Business Logic Layer
Persistence / Database data layer
Then I would cache this in the Middle Tier .. which is called by the UI Layer (i.e. the MVC Controller Action .. e.g. public void Index())
I think it's important to know how to separate your concerns .. and the database stuff should just be that -> CRUD'ish stuff and some unique stored procs when required. Don't worry about caching data, etc. Keep this layer as light as possible and as simple as possible.
Then, your middle Tier (if it exists) or your top tier should worry about what to do with this data -> in this case, cache it because it's very static.
I've implemented something similar using Linq2SQL by retrieving these 'lookup tables' as lists on app startup and storing them in ASP.NET's caching mechanism. By using the ASP.NET cache, I don't have to worry about threading/locking, etc. Not sure why you'd need to attach them to a context; something like that could easily be retrieved if necessary via the table PK id.
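A minimal sketch of that approach (class and method names are hypothetical; HttpRuntime.Cache is the ASP.NET cache mentioned above):

using System;
using System.Collections.Generic;
using System.Web;

public static class LookupCache
{
    // Returns the cached list, loading it once on first access.
    // The ASP.NET cache is thread-safe, so no explicit locking is needed here.
    public static List<T> Get<T>(string key, Func<List<T>> load)
    {
        var cached = HttpRuntime.Cache[key] as List<T>;
        if (cached == null)
        {
            cached = load();
            HttpRuntime.Cache.Insert(key, cached);
        }
        return cached;
    }
}

// Usage at app startup or on first request (LoadGeoLocationsFromDb is a placeholder):
// var geoLocations = LookupCache.Get("GeoLocations", () => LoadGeoLocationsFromDb());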
I believe this is as much a question of what to cache as how. When you are dealing with EF, you can quickly run into problems when you try to persist EF objects across different contexts and attempt to detach/attach those objects. If you are using your own POCO objects with custom T4 templates then this isn't an issue, but if you are using vanilla EF then you will want to create POCO objects for your cache.
For most simple lookup items (i.e. a numeric primary key and a string text description), you can use a Dictionary<int, string>. If you have multiple fields you need to pass back and forth with the UI, then you can build a more complete object model. Since these will be POCO objects, they can then be persisted pretty much anywhere and any way you like. I recommend keeping the caching logic outside of your MVC application so that you can easily mock the caching activity for testing. If you have multiple lists you need to cache, you can put them all in one container class that looks something like this:
public class MyCacheContainer
{
public Dictionary<int, string> GeoLocations { get; set; }
public List<Category> Categories { get; set; }
}
The next question is: do you really need these objects in your entity model at all? Chances are all you really need are the primary keys (i.e. you create a dropdown list using the keys and values from the dictionary and just post the ID). Therefore you could potentially handle all of the lookups of the textual descriptions in the construction of your view models. That could look something like this:
MyEntityObject item = Context.MyEntityObjects.FirstOrDefault(i => i.Id == id);
MyCacheContainer cache = CacheFactory.GetCache();
MyViewModel model = new MyViewModel { Item = item, GeoLocationDescription = cache.GeoLocations[item.GeoLocationId] };
If you absolutely must have those objects in your context (i.e. if there are referential entities that tie 2 or more other tables together), you can pass that cache container into your data access layer so it can do the proper lookups.
As for assigning "valid" entities, in .Net 4 you can just set the foreign key properties and don't have to actually attach an object (technically you can do this in 3.5, but it requires magic strings to set the keys). If you are using 3.5, you might just try something like this:
myItem.Category = Context.Categories.FirstOrDefault(c => c.id == id);
While this isn't the most elegant solution and does require an extra round trip to the DB to get a category you don't really need, it works. Doing a single-record lookup based on a primary key should not really be that big of a hit, especially if the table is small, like the type of lookup data you are talking about.
If you are stuck with 3.5 and don't want to make that extra round trip, and you want to go the magic-string route, just make sure you use some type of static resource and/or code generator for your magic strings so you don't fat-finger them. There are many examples here that show how to assign a new EntityKey to a reference without going to the DB, so I won't go into that in this question.