My previous question made me think again about layers, repositories, dependency injection and other architectural concerns.
My architecture now looks like this:
I am using EF Code First, so I just made POCO classes and a context. That creates the database and the model.
One level higher are the business layer classes (providers). I am using a different provider for each domain... like MemberProvider, RoleProvider, TaskProvider etc., and I am creating a new instance of my DbContext in each of these providers.
Then I instantiate these providers in my controllers, get the data and send it to the views.
My initial architecture included a repository, which I got rid of because I was told that it just adds complexity ("why not just use EF directly?"). I wanted to do that - work with EF directly from the controllers - but I have to write tests, and that was a bit complicated with a real database. I had to fake - mock - the data somehow. So I made an interface for each provider and created fake providers with hardcoded data in lists. And with this I got back to something where I am not sure how to proceed correctly.
These things start to get overcomplicated too quickly... so many approaches and "patterns"... it just creates too much noise and useless code.
Is there any SIMPLE and testable architecture for creating an ASP.NET MVC3 application with Entity Framework?
If you want to use TDD (or any other testing approach with high test coverage) together with EF, you must write integration or end-to-end tests. The problem is that any approach that mocks either the context or the repository only produces tests of your upper-layer logic (the code that consumes those mocks), not of your application.
Simple example:
Let's define a generic repository:
public interface IGenericRepository<TEntity>
{
IQueryable<TEntity> GetQuery();
...
}
And let's write a business method:
public IEnumerable<MyEntity> DoSomethingImportant()
{
var data = MyEntityRepo.GetQuery().Select((e, i) => e);
...
}
Now if you mock the repository, you will be running on LINQ to Objects and you will have a green test; but if you run the application against LINQ to Entities, you will get an exception, because the Select overload taking an index is not supported in L2E.
This was a simple example, but the same can happen with using methods in queries and other common mistakes. Moreover, this also affects methods like Add, Update and Delete, usually exposed on a repository. If you don't write a mock which exactly simulates the behavior of the EF context, including referential integrity, you will not be testing your implementation.
Another part of the story is lazy loading, which is also hard to detect with unit tests against mocks.
Because of that you should also introduce integration or end-to-end tests which work against a real database, using a real EF context and L2E. By the way, using end-to-end tests is required to do TDD correctly. For writing end-to-end tests in ASP.NET MVC you can use WatiN, and possibly also SpecFlow for BDD. This will really add a lot of work, but your application will be really tested. If you want to read more about TDD, I recommend this book (the only disadvantage is that the examples are in Java).
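To give a flavor of what such an end-to-end test looks like, here is a minimal WatiN sketch (NUnit style; the URL, field name and button caption are made-up placeholders for your application):

[Test]
public void CreateTask_ShowsUpInTaskList()
{
    // drives a real browser against the running application
    using (var browser = new IE("http://localhost:1234/Task/Create"))
    {
        browser.TextField(Find.ByName("Name")).TypeText("Write report");
        browser.Button(Find.ByValue("Save")).Click();

        // the new task should appear on the resulting page
        Assert.IsTrue(browser.ContainsText("Write report"));
    }
}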
Integration tests make sense if you don't use a generic repository and instead hide your queries in a class which doesn't expose IQueryable but returns data directly.
Example:
public interface IMyEntityRepository
{
MyEntity GetById(int id);
MyEntity GetByName(string name);
}
Now you can just write integration tests to test the implementation of this repository, because the queries are hidden in this class and not exposed to upper layers. But this type of repository is sometimes considered an old-style implementation from the days of stored procedures. You will lose a lot of ORM features with this implementation, or you will have to do a lot of additional work - for example, adding the specification pattern so that you can define a query in an upper layer.
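A minimal sketch of such an integration test, assuming a concrete MyEntityRepository and an EF context named MyContext pointing at a test database (both names are illustrative):

[Test]
public void GetByName_ReturnsEntityStoredInDatabase()
{
    // real context, real database - the query runs through L2E,
    // not LINQ to Objects (cleanup/transaction handling omitted)
    using (var context = new MyContext())
    {
        context.MyEntities.Add(new MyEntity { Name = "test" });
        context.SaveChanges();

        var repository = new MyEntityRepository(context);

        Assert.IsNotNull(repository.GetByName("test"));
    }
}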
In ASP.NET MVC you can partially replace end-to-end tests with integration tests at the controller level.
Edit based on comment:
I am not saying that you need unit tests, integration tests and end-to-end tests. I am saying that building a tested application requires much more effort. The amount and types of tests needed depend on the complexity of your application, its expected future, your skills and the skills of the other team members.
Small straightforward projects can be created without tests at all (OK, it is not a good idea, but we have all done it and in the end it worked), but once a project passes some threshold you can find that introducing new features or maintaining the project is very hard, because you are never sure whether a change breaks something that already worked - that is called regression. The best defence against regression is a good set of automated tests.
Unit tests help you test a single method. Such tests should ideally cover all execution paths through the method. These tests should be very short and easy to write - the complicated part can be setting up the dependencies (mocks, fakes, stubs).
Integration tests help you test functionality across multiple layers and usually across multiple processes (application, database). You don't need them for everything; it is more a matter of experience to select where they are helpful.
End-to-end tests are something like the validation of a use case / user story / feature. They should cover the whole flow of the requirement.
It is not necessary to test a feature multiple times - if you know that the feature is covered by an end-to-end test, you don't need to write an integration test for the same code. Also, if you know that a method has only a single execution path which is covered by an integration test, you don't need to write a unit test for it. This works much better with a TDD approach, where you start with a big test (end-to-end or integration) and drill down to unit tests.
Depending on your development approach, you don't have to start with multiple types of tests from the beginning; you can introduce them later as your application becomes more complex. The exception is TDD/BDD, where you should start with at least end-to-end and unit tests before you even write a single line of other code.
So you are asking the wrong question. The question is not which approach is simpler, but which will help you in the end and what complexity fits your application. If you want easily unit-tested application and business logic, you should wrap the EF code in other classes which can be mocked. At the same time, you must introduce other types of tests to ensure that the EF code works.
I can't tell you which approach will fit your environment / project / team / etc., but I can share an example from a past project of mine:
I worked on a project for about 5-6 months with two colleagues. The project was based on ASP.NET MVC 2 + jQuery + EFv4 and was developed in an incremental and iterative way. It had a lot of complicated business logic and a lot of complicated database queries. We started with generic repositories and high code coverage from unit tests, plus integration tests to validate the mapping (simple tests for inserting, deleting, updating and selecting an entity). After a few months we found that our approach didn't work. We had more than 1,200 unit tests, code coverage of about 60% (which is not very good) and a lot of regression problems. Changing anything in the EF model could introduce unexpected problems in parts which hadn't been touched for several weeks. We found that we were missing integration or end-to-end tests for our application logic. The same conclusion was reached by a parallel team working on another project, and using integration tests became the recommendation for new projects.
Does using the repository pattern add complexity? In your scenario I don't think so. It makes TDD easier and your code more manageable. Try using a generic repository pattern for more separation and cleaner code.
If you want to find out more about TDD and design patterns in Entity Framework, take a look at: http://msdn.microsoft.com/en-us/ff714955.aspx
However, it seems like you're looking for an approach to mock-test Entity Framework. One solution would be using a virtual Seed method to generate data on database initialization. Take a look at the Seed section at: http://blogs.msdn.com/b/adonet/archive/2010/09/02/ef-feature-ctp4-dbcontext-and-databases.aspx
Also, you can use a mocking framework; a small Moq example follows the list. The most famous ones I know are:
Rhino Mocks
Moq
Typemock (Commercial)
To see a more complete list of .NET mocking frameworks, check out: https://stackoverflow.com/questions/37359/what-c-mocking-framework-to-use
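For instance, a quick Moq sketch against one of the provider interfaces you already extracted (IMemberProvider, GetMemberByName and MemberController are placeholders for your own types):

var provider = new Mock<IMemberProvider>();
provider.Setup(p => p.GetMemberByName("john"))
    .Returns(new Member { Name = "john" });

// the controller under test receives the mock instead of a real provider
var controller = new MemberController(provider.Object);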
Another approach would be to use an in-memory database provider like SQLite. Read more at: Is there an in-memory provider for Entity Framework?
Finally, here are some good links about unit testing Entity Framework (some of them refer to Entity Framework 4.0, but you'll get the idea):
http://social.msdn.microsoft.com/Forums/en/adodotnetentityframework/thread/678b5871-bec5-4640-a024-71bd4d5c77ff
http://mosesofegypt.net/post/Introducing-Entity-Framework-Unit-Testing-with-TypeMock-Isolator.aspx
What is the recommended way to fake my database layer in a unit test?
What I do is use simple ISession and EFSession objects, which are easy to mock in my controllers, easy to access with LINQ and strongly typed. I inject them with DI using Ninject.
public interface ISession : IDisposable
{
void CommitChanges();
void Delete<T>(Expression<Func<T, bool>> expression) where T : class, new();
void Delete<T>(T item) where T : class, new();
void DeleteAll<T>() where T : class, new();
T Single<T>(Expression<Func<T, bool>> expression) where T : class, new();
IQueryable<T> All<T>() where T : class, new();
void Add<T>(T item) where T : class, new();
void Add<T>(IEnumerable<T> items) where T : class, new();
void Update<T>(T item) where T : class, new();
}
public class EFSession : ISession
{
DbContext _context;
public EFSession(DbContext context)
{
_context = context;
}
public void CommitChanges()
{
_context.SaveChanges();
}
public void Delete<T>(System.Linq.Expressions.Expression<Func<T, bool>> expression) where T : class, new()
{
var query = All<T>().Where(expression);
foreach (var item in query)
{
Delete(item);
}
}
public void Delete<T>(T item) where T : class, new()
{
_context.Set<T>().Remove(item);
}
public void DeleteAll<T>() where T : class, new()
{
var query = All<T>();
foreach (var item in query)
{
Delete(item);
}
}
public void Dispose()
{
_context.Dispose();
}
public T Single<T>(System.Linq.Expressions.Expression<Func<T, bool>> expression) where T : class, new()
{
return All<T>().FirstOrDefault(expression);
}
public IQueryable<T> All<T>() where T : class, new()
{
return _context.Set<T>().AsQueryable<T>();
}
public void Add<T>(T item) where T : class, new()
{
_context.Set<T>().Add(item);
}
public void Add<T>(IEnumerable<T> items) where T : class, new()
{
foreach (var item in items)
{
Add(item);
}
}
/// <summary>
/// Not needed with EF4: attached entities are change-tracked automatically,
/// so just call CommitChanges(). This method intentionally does nothing.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="item"></param>
public void Update<T>(T item) where T : class, new()
{
// nothing needed here; the context tracks changes on attached entities
}
}
If I want to switch from EF4 to, let's say, MongoDB, I only have to write a MongoSession that implements ISession...
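And for plain unit tests you don't even need a mocking framework; a rough in-memory fake is often enough. A sketch (keep in mind the LINQ to Objects vs. LINQ to Entities caveats from the first answer - this fake will happily run queries that L2E would reject):

public class FakeSession : ISession
{
    // one in-memory list per entity type
    private readonly Dictionary<Type, object> _store = new Dictionary<Type, object>();

    private List<T> Set<T>() where T : class, new()
    {
        if (!_store.ContainsKey(typeof(T)))
            _store[typeof(T)] = new List<T>();
        return (List<T>)_store[typeof(T)];
    }

    public void CommitChanges() { } // nothing to persist in memory

    public void Delete<T>(Expression<Func<T, bool>> expression) where T : class, new()
    {
        foreach (var item in All<T>().Where(expression).ToList())
            Delete(item);
    }

    public void Delete<T>(T item) where T : class, new() { Set<T>().Remove(item); }

    public void DeleteAll<T>() where T : class, new() { Set<T>().Clear(); }

    public T Single<T>(Expression<Func<T, bool>> expression) where T : class, new()
    {
        return All<T>().FirstOrDefault(expression);
    }

    public IQueryable<T> All<T>() where T : class, new() { return Set<T>().AsQueryable(); }

    public void Add<T>(T item) where T : class, new() { Set<T>().Add(item); }

    public void Add<T>(IEnumerable<T> items) where T : class, new()
    {
        foreach (var item in items) Add(item);
    }

    public void Update<T>(T item) where T : class, new() { } // in-memory objects are always current

    public void Dispose() { _store.Clear(); }
}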
I was having the same problem deciding on the general design of my MVC application. This CodePlex project by Shiju Varghese was a lot of help. It is done in ASP.NET MVC3 with EF Code First, and it also utilizes a service layer and a repository layer, with dependency injection done using Unity. It is simple and very easy to follow, and it is backed by four very nice blog posts. It's worth checking out. And don't give up on the repository... yet.
Related
I am implementing a form of CQRS that uses a single data store but separate query and command models. For the command side of things I am implementing DDD, including repositories, IoC and dependency injection. For the query side I am using the Finder pattern as described here. Basically, a finder is similar to a repository, but with Find methods only.
So on the read side of my application, in my DAL, I use ADO.NET and raw SQL to do my queries. The ADO.NET stuff is all abstracted away into a nice helper class, so my finder classes simply pass the query to the ADO helper, which returns generic data objects that the finder/mapper class turns into read models.
Currently the finder methods, like my command repositories, are accessed through interfaces that are injected into my controllers, but I am wondering whether the interfaces, DI and IoC are overkill for the query side, as everything I have read about the read side of CQRS recommends a "thin data layer".
Why not just access my finders directly? I understand the arguments for interfaces and DI, i.e. separation of concerns and testability. As far as separation of concerns goes, my DAL has already separated out database-specific logic by using a mapper class and putting the ADO.NET stuff in a helper class. As far as testing is concerned, according to this question unit testing read models is not a necessity.
So in summary, for read models, can I just do this:
public class PersonController : Controller
{
public ActionResult Details(int id)
{
var person = PersonFinder.GetByID(id);
// TODO: Map person to viewmodel
return this.View(viewmodel);
}
}
Instead of this:
public class PersonController : Controller
{
private IPersonFinder _person;
public PersonController(IPersonFinder person)
{
_person = person;
}
public ActionResult Details(int id)
{
Person person = _person.GetByID(id);
// TODO: Map person to viewmodel
return this.View(viewmodel);
}
}
Are you using both IoC and DI? That's badass! Anyway, the second version is the better one, because it doesn't depend on a static class. Using statics will open Pandora's box; don't do it, for all the reasons that statics are bad.
You really don't get any benefit from using a static class, and since you are already using a DI container, there's no additional cost. You are still using the finders directly - you just let the DI container instantiate one instead of calling a static object yourself.
Update
A thin read layer refers to using a simplified read model instead of the rich domain objects. It is unrelated to DI; it doesn't matter how the query service is built or by whom, what matters is that the business objects are not involved in queries.
Read/write separation is completely unrelated to coding techniques like dependency injection. Your read models serve fewer purposes than your combined read/write models did before. Could you consider ditching all the server-side code and just using your database's native REST API? Could you wire your controller to query the database directly with SQL and return the data as JSON? Do you need a generic repository-like pattern to deal with specific read requests?
I'm practicing DDD with ASP.NET MVC and have come to a situation where my controllers have many dependencies on different services and repositories, and testing becomes very tedious.
In general, I have a service or repository for each aggregate root. Consider a page which lists a customer, along with its orders and dropdowns of different packages and sellers. All of those types are aggregate roots. For this to work, I need a CustomerService, an OrderService, a PackageRepository and a UserRepository. Like this:
public class OrderController {
public OrderController(CustomerService customerService,
OrderService orderService, Repository<Package> packageRepository,
Repository<User> userRepository)
{
_customerService = customerService;
...
}
}
Imagine the number of dependencies and constructor parameters required to render a more complex view.
Maybe I'm approaching my service layer wrong; I could have a single CustomerService which takes care of all this, but then its constructor would explode instead. I think I'm violating SRP too much.
I think I'm violating SRP too much.
Bingo.
I find that using a command processing layer makes my application's architecture cleaner and more consistent.
Basically, each service method becomes a command handler class (and the method parameters become a command class), and every query is also its own class.
This won't actually reduce your dependencies - your query will likely still require those same few services and repositories to provide the correct data. However, when using an IoC framework like Ninject or Spring, it won't matter, because the container will inject what is needed up the whole chain - and testing should be much easier, as a dependency on a specific query is easier to fill and test than a dependency on a service class with many marginally related methods.
Also, now the relationship between the Controller and its dependencies is clear, logic has been removed from the Controller, and the query and command classes are more focused on their individual responsibilities.
Yes, this does cause a bit of an explosion of classes and files. Employing proper object-oriented programming tends to do that. But frankly, what's easier to find/organize/manage: a function in a file of dozens of other semi-related functions, or a single file in a directory of dozens of semi-related files? I'll take the latter, hands down.
Code Better had a blog post recently that nearly matches my preferred way of organizing controllers and commands in an MVC app.
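To make that concrete, here is a minimal sketch of the idea; all the names (PlaceOrderCommand, ICommandHandler, OrderService.Place) are hypothetical:

public class PlaceOrderCommand
{
    public int CustomerId { get; set; }
    public int PackageId { get; set; }
}

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

public class PlaceOrderHandler : ICommandHandler<PlaceOrderCommand>
{
    // only the dependencies this one use case actually needs
    private readonly OrderService _orders;

    public PlaceOrderHandler(OrderService orders)
    {
        _orders = orders;
    }

    public void Handle(PlaceOrderCommand command)
    {
        _orders.Place(command.CustomerId, command.PackageId);
    }
}

// The controller now states a single, focused dependency:
public class OrderController : Controller
{
    private readonly ICommandHandler<PlaceOrderCommand> _placeOrder;

    public OrderController(ICommandHandler<PlaceOrderCommand> placeOrder)
    {
        _placeOrder = placeOrder;
    }
}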
Well, you can solve this issue easily by using RenderAction. Just create separate controllers, or introduce child actions in the existing ones, then have the main view call the render actions with the required parameters. This will give you a nice composite view.
Why not have a service for this scenario that returns a view model for you? That way you only have one dependency in the controller, although your service may have the separate dependencies itself.
The book Dependency Injection in .NET suggests introducing "facade services": if you feel you have too many constructor parameters, group related services together and inject the facade instead, as sketched below.
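A hedged sketch of what that could look like for the OrderController above (OrderPageFacade is a made-up name):

public class OrderPageFacade
{
    public CustomerService Customers { get; private set; }
    public OrderService Orders { get; private set; }
    public Repository<Package> Packages { get; private set; }
    public Repository<User> Users { get; private set; }

    public OrderPageFacade(CustomerService customers, OrderService orders,
        Repository<Package> packages, Repository<User> users)
    {
        Customers = customers;
        Orders = orders;
        Packages = packages;
        Users = users;
    }
}

public class OrderController
{
    private readonly OrderPageFacade _facade;

    // one constructor parameter; the container wires up the rest
    public OrderController(OrderPageFacade facade)
    {
        _facade = facade;
    }
}

The controller still reaches the same four dependencies, but its constructor now expresses a single page-level concern, and the grouping often hints at a missing domain concept.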
Update: I finally had some available time, so I ended up finally creating an implementation for what I was talking about in my post below. My implementation is:
public class WindsorServiceFactory : IServiceFactory
{
protected IWindsorContainer _container;
public WindsorServiceFactory(IWindsorContainer windsorContainer)
{
_container = windsorContainer;
}
public ServiceType GetService<ServiceType>() where ServiceType : class
{
// Use windsor to resolve the service class. If the dependency can't be resolved throw an exception
try { return _container.Resolve<ServiceType>(); }
catch (ComponentNotFoundException) { throw new ServiceNotFoundException(typeof(ServiceType)); }
}
}
All that is needed now is to pass my IServiceFactory into my controller constructors, and I am able to keep my constructors clean while still allowing easy (and flexible) unit tests. More details can be found on my blog if you are interested.
I have noticed the same issue creeping up in my MVC app, and your question got me thinking about how I want to handle this. As I'm using a command and query approach (where each action or query is a separate service class), my controllers are already getting out of hand, and will probably be even worse later on.
After thinking about it, the route I am going to explore is creating a ServiceFactory class, which would look like this:
public class ServiceFactory
{
public ServiceFactory( UserService userService, CustomerService customerService, etc...)
{
// Code to set private service references here
}
public T GetService<T>() where T : IService
{
// Determine whether T is a valid service type,
// and return the instantiated version of that service class;
// otherwise throw an error
}
}
}
Note that I wrote this up offhand in Notepad++, so treat the GetService method as a sketch rather than compiling code - but that's the general idea. Your controller will then end up looking like this:
public class OrderController {
public OrderController(ServiceFactory factory) {
_factory = factory;
}
}
You would then have IoC instantiate your ServiceFactory instance, and everything should work as expected.
The good part about this is that if you realize you need the ProductService class in your controller, you don't have to touch the controller's constructor at all; you just call _factory.GetService() for the service you need in the action method.
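For example, inside an action method (ProductService and its GetById are placeholders):

public ActionResult Details(int id)
{
    // resolve the service on demand instead of via constructor injection
    var productService = _factory.GetService<ProductService>();
    return View(productService.GetById(id));
}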
Finally, this approach still lets you mock services out (one of the big reasons for using IoC and passing them straight into the controller's constructor): just create a new ServiceFactory in your test code with the mocked services passed in (and the rest left as null).
I think this keeps a good balance between flexibility and testability, and it keeps service instantiation in one spot.
After typing this all out I'm actually excited to go home and implement this in my app :)
My application
I have an application design that looks like this:
web application layer - an ASP.NET MVC app with controllers and views that use POCOs and call services
service layer - business processes that use POCOs and call repositories
data layer - repositories that use POCOs and communicate with the model, in the form of an EF model which is part of this same layer
POCO layer - defines all the classes used for communication between these layers
So the data model implementation is completely hidden from the upper layers, because they don't use data entities at all.
Testing
As far as I understand them, unit, integration and system testing (in relation to ASP.NET MVC) work like this:
unit testing - this one's easy: mock the decoupled objects and inject them in your unit test so the tested unit uses them
integration testing - should take a group of real production units and mock the rest, e.g. writing integration tests that test controller, service and repository integration without actually using the production database
system testing - run tests on all layers without any mocking, meaning I have to use a production-like test database as well
Problem
I can easily see how to write unit tests as well as system tests, but I don't know how to write integration tests. Maybe my view of these is completely distorted and I don't understand them at all.
How should one write integration and system tests for an Asp.net MVC application?
Or any .net application for that matter?
Some code that helps explain the problem
Suppose I have classes like:
TaskController calls into TaskService
TaskService calls into TaskRepository
TaskRepository manipulates EF data internally
So here are my (abbreviated) classes:
public class TaskController
{
private ITaskService service;
// injection constructor
public TaskController(ITaskService service)
{
this.service = service;
}
// default constructor
public TaskController() : this(new TaskService()) {}
public ActionResult GetTasks()
{
return View(this.service.GetTasks());
}
...
}
public class TaskService : ITaskService
{
private ITaskRepository repository;
// injection constructor
public TaskService(ITaskRepository repository)
{
this.repository = repository;
}
// default constructor
public TaskService() : this(new TaskRepository()) {}
public IList<Task> GetTasks()
{
return this.repository.GetTasks();
}
...
}
public class TaskRepository : ITaskRepository
{
public IList<Task> GetTasks()
{
// code that gets tasks from EF and converts to Task POCOs
}
...
}
A unit test is simple and would look like this:
public void UnitTest()
{
var mock = new Mock<ITaskService>();
// other code that mocks the service
TaskController controller = new TaskController(mock.Object);
// do the test
}
But when it comes to an integration test, how do I mock only certain parts of the integration?
public void IntegrationTest()
{
// no mocking at all
TaskController controller = new TaskController();
// do some testing
}
First of all, I can't just mock the database here, can I? I could mock the repository and have a real service and controller, though...
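Something like this sketch, perhaps, reusing the Moq style from the unit test above - only the outermost dependency is mocked, so the controller-service integration is actually exercised:

public void IntegrationTest()
{
    var repository = new Mock<ITaskRepository>();
    repository.Setup(r => r.GetTasks()).Returns(new List<Task> { new Task() });

    // real controller and real service, fake repository
    var controller = new TaskController(new TaskService(repository.Object));
    var result = controller.GetTasks() as ViewResult;
    // assert on result.Model...
}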
Integration tests should test the integration between components. While unit tests test individual pieces of a single component, integration tests exercise the interactions between components and are meant to run live. So an integration test would utilize a database and any other external dependencies, whereas in unit testing it's best to mock those services out.
System testing, to me, would be functional testing (another level of testing, using something like FIT) or UI testing through a tool like TestComplete or Telerik's QA tool.
HTH.
Integration tests that don't involve the UI can still be written in NUnit, xUnit, etc.
For ASP.NET MVC in particular (or any web application), you can use WatiN or Selenium to write system/integration tests using the UI.
You might also want to look at T.S.T. for T-SQL unit testing, and SpecFlow if you are curious about BDD in .NET.
Note: I wrote this before the question was updated with code and a description of the specific situation. It doesn't really address the question anymore, but hopefully it will still be interesting/useful to someone.
I just replied to a related question: Integration tests implementation. I think it addresses the issue here by presenting a clear approach to take.
Part of the issue is that higher-level integration tests get too complex very quickly. It's the nature of the problem, so I prefer to make sure all the separate pieces work as intended, through unit tests and focused integration tests, depending on the class involved.
With the above in place, you only need to make sure the separate pieces are hooked up correctly, so I use full system tests for that. This works best when the code follows SOLID and DRY.
This time I have a more philosophical question.
Most MVC tutorials/books seem to suggest restricting the scope of one repository to one aspect of the model and setting up multiple repositories to cover all model classes (e.g. ProjectRep, UserRep, ImageRep, all mapping onto the same DB eventually).
I can see how that would make unit testing simpler, but I can't imagine how this works in the real world, where most entities have relationships with each other. In the end I always find myself with one gigantic repository class per DB connection and an equally awkward FakeRepository for unit testing.
So, what's your opinion? Should I try harder to separate out the repositories? Does it even matter if the ProductRep refers to data in the UserRep and vice versa via PurchaseHistory? How would the different reps make sure they don't lock each other up when accessing the single DB?
Thanks,
Duffy
This works in the real world because there's a single engine under the hood of the repositories. This can be an ORM like NHibernate or your own solution. This engine knows how to relate objects, lock the database, cache requests, etc. But the business code shouldn't know these infrastructure details.
What you effectively do when you end up with one gigantic repository is expose this single engine to the real world instead. Thus you pollute your domain code with engine-specific details that are otherwise hidden behind repository interfaces.
S#arp Architecture is a very good illustration of how the repositories Josh talks about are implemented and work. Also read the NHibernate best practices article, which explains much of S#arp's background.
And frankly, I can't imagine how your gigantic repository works in the real world, because the very idea looks bad and unmaintainable.
I've found that by using generics and an interface, you can avoid having to code many separate repositories. If your domain model is well-factored, you can often navigate object graphs using the domain objects rather than another repository.
public class MyDomainObject : IEntity // IEntity is an arbitrary interface for base entities
{
public int Id { get; set;}
public List<DiffObj> NavigationProp { get; set;}
//... and so on...
}
public interface IRepository<T> where T : class, IEntity, new()
{
void Insert(T entity);
T FindById(int id);
//... and so on
}
The usage is pretty simple, and by using IoC you can completely decouple the implementation from the rest of your application:
public class MyBusinessClass
{
private IRepository<SomeDomainObject> _aRepo;
public MyBusinessClass(IRepository<SomeDomainObject> aRepo)
{
_aRepo = aRepo;
}
//...and so on
}
This makes writing unit tests a real snap.
I'm developing an application using ASP.NET MVC, NHibernate and DDD. I have a service layer that is used by the controllers of my application. Everything uses Unity to inject dependencies (ISessionFactory into repositories, repositories into services and services into controllers), and it works fine.
But it's very common that I need a method in a service that just gets an object from my repository, like this (in the service class):
public class ProductService {
private readonly IUnitOfWork _uow;
private readonly IProductRepository _productRepository;
public ProductService(IUnitOfWork unitOfWork, IProductRepository productRepository) {
this._uow = unitOfWork;
this._productRepository = productRepository;
}
/* should this method exist in DDD? It's very common */
public Domain.Product Get(long key) {
return _productRepository.Get(key);
}
/* another common method... is it correct DDD? */
public bool Delete(long key) {
using (var tx = _uow.BeginTransaction()) {
try
{
_productRepository.Delete(key);
tx.Commit();
return true;
} catch {
tx.Rollback();
return false;
}
}
}
/* ... others methods ... */
}
Is this code correct DDD? For each service class I have a repository; do I need to write a "Get" method in the service for each entity?
Thanks guys
Cheers
Your ProductService doesn't look like it follows Domain-Driven Design principles. If I understand correctly, it is part of the Application layer between Presentation and Domain. If so, the methods on ProductService should have business meaning with regard to products.
Let's talk about deleting products. Is it as simple as executing a delete on the database (via NHibernate, or whatever)? I think it is not. What about orders which reference the to-be-deleted product? And so on and so forth. By the way, Udi Dahan wrote a great article on deleting entities.
The bottom line is: if your application is so simple that the services really do replicate your repositories and contain only CRUD operations, you probably shouldn't do DDD - throw away your repositories and let the services operate on entities (which would be simple data containers in that case).
On the other hand, if there is complicated behavior (like the one around handling 'deleted' products), there is a point in going down the DDD path, and I strongly advocate doing so.
PS. Regardless of which approach (DDD or not) you eventually take, I would encourage you to use some aspect-oriented programming to handle the transaction and exception related stuff. Otherwise you will end up with way too many methods like DeleteProduct containing the same transaction and exception handling code.
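Even without a full AOP framework you can factor that boilerplate out once. A minimal sketch, assuming the IUnitOfWork from the question (TryInTransaction is a made-up helper inside the service):

private bool TryInTransaction(Action action)
{
    using (var tx = _uow.BeginTransaction())
    {
        try
        {
            action();
            tx.Commit();
            return true;
        }
        catch
        {
            tx.Rollback();
            return false;
        }
    }
}

public bool Delete(long key)
{
    return TryInTransaction(() => _productRepository.Delete(key));
}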
That looks correct from my perspective. I really didn't like repeating service and repository method names over and over in my ASP.NET MVC project, so I went for a generic repository approach/pattern. This means I really only need one or two Get() methods in my repository to retrieve my objects. This is possible for me because I am using Entity Framework, and I just have my repository's Get() method return an IQueryable. Then I can just do the following:
Product product = (from p in _productRepository.Get() where p.Id == id select p).FirstOrDefault();
You can probably replicate this in NHibernate with LINQ to NHibernate.
Edit: This works for DDD because it still allows me to interchange my DAL/repositories, as long as the data library I am using (NHibernate, EF, etc.) supports IQueryable.
I am not sure how to do a generic repository without IQueryable, but you might be able to use delegates/lambda functions to incorporate it.
Edit 2: And just in case I didn't answer your question correctly: if you are asking whether you are supposed to call your repository's Get() method from the service, then yes, that is correct DDD design as well. The reason is that the service layer is supposed to handle all your business logic, so it decides exactly how and what data to retrieve (for example, do you want it in alphabetical order, unordered, etc.). It also means it can perform validation after loading, or validation before deleting and/or saving.
This means that the service layer doesn't care exactly how the data is stored and retrieved; it only decides what data is stored and retrieved. It then calls on the repository to handle the request and retrieve/store the data the way the service layer tells it to. Thus you have a correct separation of concerns.
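As a closing sketch of that division of labor, assuming a generic repository whose Get() returns IQueryable as described above:

public class ProductService
{
    private readonly IRepository<Product> _products;

    public ProductService(IRepository<Product> products)
    {
        _products = products;
    }

    // the service decides what to retrieve and how (filtering, ordering);
    // the repository only exposes the raw IQueryable
    public IList<Product> GetProductsAlphabetically()
    {
        return _products.Get().OrderBy(p => p.Name).ToList();
    }
}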