I am working on an ASP.NET MVC application and writing my unit tests BDD style.
E.g.:
GetResource_WhenResourceFileExists_ShouldReturnResources()
But when I am writing tests for my controllers, I usually have two methods with the same name: one without parameters for GET requests, and one with parameters for POSTs. Does anybody have a good naming convention to distinguish between the two?
I can think of:
1.
LogIn_WithParameters_ShouldReturnLogInView()
LogIn_WithoutParameters_WhenAuthenticationFailed_ShouldReturnLogInView()
LogIn_WithoutParameters_WhenAuthenticationPassed_ShouldReturnProfileRedirect()
2.
LogIn_Get_ShouldReturnLogInView()
LogIn_Post_WhenAuthenticationFailed_ShouldReturnLogInView()
LogIn_Post_WhenAuthenticationPassed_ShouldReturnProfileRedirect()
3.
LogIn_ShouldReturnLogInView()
LogIn_WhenCalledWithParametersAndAuthenticationFailed_ShouldReturnLogInView()
LogIn_WhenCalledWithParametersAndAuthenticationPassed_ShouldReturnProfileRedirect()
Any opinions?
I use the following format which works very well for me:
[TestFixture]
public class Log_in_with_parameters_should
{
[Test]
public void Return_the_log_in_view() {}
}
[TestFixture]
public class Log_in_without_parameters_should
{
[Test]
public void Return_the_log_in_view_when_the_authentication_failed() {}
[Test]
public void Redirect_to_the_profile_when_the_authentication_passed() {}
}
I think this is a perfect example of why rigid naming conventions for unit tests are unattractive.
Your proposed scheme will only work when you have two method overloads: one with and one without parameters. It doesn't extend to the scenario where you have more than one overload with different parameters.
Personally I prefer a much looser naming convention that can be summarized as
[Action][Will|Should|Is|...][Result]
This gives me the flexibility to name my tests
SutIsPathResolutionCommand
ExecuteWithNullEvaluationContextWillThrow
ExecuteWillAddDefaultClaimsTransformationServiceWhenNoConnectionServiceIsAvailable
I must admit that I rarely read the name of the test anyway. Instead, I read the specification of what it does (i.e. the test code). The name is just not that important to me.
One option, which I don't particularly like, is to give the controller actions different names, but to then rename them using the ActionName attribute:
public ActionResult Login() {
// ... code ...
return View();
}
[HttpPost]
[ActionName("Login")]
public ActionResult LoginPost(... some params ...) {
// ... more code ...
return View();
}
This essentially trades one problem (unit test naming) for another (harder-to-read controller code). Nevertheless, you might find this pattern appealing, since it does solve the stated problem.
I use a similar naming convention to the one in your question, i.e. method_scenario_expected.
I think you should elaborate more on the "scenario" part - if you're passing parameters, let the reader know what is special about them.
Keep in mind that naming your tests this way is more "TDD oriented" and not BDD - BDD test names should be about rules and "behaviors".
If you feel that the current naming convention does not help the code's readability, feel free to experiment and find what works for you.
I may not be answering your question, but I want to share what I do.
I don't follow a specific naming convention, but I try to give names which explain what the test method is trying to test. In cases where I need more explanation I add a description: [Test("This test evaluates how many questions were answered by a specific user")].
One thing to make sure of is that the tests are readable and quickly understandable.
I'm reading through the ASP.NET 5 docs and was choking on the chapter on dependency injection.
The docs recommend writing controllers like so:
public class MyController: Controller
{
private readonly MyService _myService;
public MyController(MyService myService)
{
_myService = myService;
}
public IActionResult Index()
{
// use _myService
}
}
The short and direct version is discouraged:
public class MyController : Controller
{
public IActionResult Index()
{
var myService = (MyService)HttpContext.RequestServices.GetService(typeof(MyService));
}
}
The given reason is because allegedly the recommended version...
[...] yields classes that are easier to test (see Testing) and are more loosely coupled.
The linked testing chapter doesn't shed any light on this weird statement.
I didn't look at the sources, but I assume whatever constructs the controller is using HttpContext.RequestServices.GetService itself to deliver the dependency? Clearly a test can set up a different implementation for testing, and clearly that is the whole point of a DI framework, right?
The colossus (MyService)HttpContext.RequestServices.GetService(typeof(MyService)) is bad enough, but a small helper could fix that (was a simple Get<MyService>() really so hard?).
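Something like this one-line extension method would do (just a sketch, my naming - and I've since noticed the DI package also offers a GetRequiredService<T>() extension on IServiceProvider):

public static class HttpContextExtensions
{
    // hypothetical helper, not part of the framework
    public static T Get<T>(this HttpContext context)
    {
        return (T)context.RequestServices.GetService(typeof(T));
    }
}

// usage in an action: var myService = HttpContext.Get<MyService>();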
But that this excessive clutter is recommended for basically every controller and more is disturbing.
It's all the more puzzling as there already is a Microsoft DI framework with a nicer usage pattern, MEF:
public class MyController : Controller
{
[Import]
private MyService _myService;
public IActionResult Index()
{
// use _myService
}
}
Why not at least just take that one? What's going on here?
This isn't an ASP.NET Core-specific solution. This is how just about every DI framework works. The most common approach is to have all the dependencies of a controller as constructor parameters. This makes it clear what services the controller uses. There are multiple alternative solutions, but the basic idea stays the same, and each has its own pros and cons.
Clearly a test can set up a different implementation for testing, and clearly that is the whole point of a DI framework, right?
This line isn't clear to me. What do you think the 'whole point of a DI framework' is? This line suggests you only use it so you can use a different implementation for testing.
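To make the testing point concrete: with constructor injection, a test can hand the controller a fake directly, with no container and no HttpContext involved. A minimal sketch, assuming MyService is put behind an IMyService interface and Index returns a view (the interface, fake and test names are mine):

public interface IMyService { }

public class FakeMyService : IMyService { }

[TestFixture]
public class MyControllerTests
{
    [Test]
    public void Index_returns_a_view_using_the_fake_service()
    {
        var controller = new MyController(new FakeMyService());

        var result = controller.Index();

        Assert.IsInstanceOf<ViewResult>(result);
    }
}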
But that this excessive clutter is recommended for basically every controller and more is disturbing.
Excessive clutter? What if I want to use MyService in two (or more) functions? Should I use this:
public class MyController : Controller
{
public IActionResult Index()
{
var myService = (MyService)HttpContext.RequestServices.GetService(typeof(MyService));
}
public IActionResult Index2()
{
var myService = (MyService)HttpContext.RequestServices.GetService(typeof(MyService));
}
}
Or should I opt for the solution where I set it up in the constructor? Seems like an obvious choice to me. In such a small example it may look like clutter, but add 10 lines of code to it and you'll barely notice a small constructor and some variable declarations.
You can use HttpContext.RequestServices while testing. It's a way to quickly grab something from the container when you need it, but it should certainly not be part of the actual production code. You're simply hiding the dependency from sight.
Lastly, you suggest property injection. This is a valid solution, but an often-used argument against it is that it hides the dependency. If you define it as a constructor parameter, you can't hide it. Besides, a lot of DI frameworks don't even support property or method injection for this reason.
If you want to use MEF in your project you are free to do so. But it should, in my opinion, not be the default DI framework for ASP.NET. What's available right now is more than sufficient to do most tasks. If you need more functionality you can always use a different DI framework like StructureMap or AutoFac.
In the end it all comes down to what works for you. But stating that this is either bad design or bad documentation is just wrong. You are of course free to prove me wrong: you could improve the ASP.NET documentation, or prove that the concept of inversion of control is wrong and suggest a better solution.
I'm practicing DDD with ASP.NET MVC and have come to a situation where my controllers have many dependencies on different services and repositories, and testing becomes very tedious.
In general, I have a service or repository for each aggregate root. Consider a page which will list a customer, along with its orders and a dropdown of different packages and sellers. All of those types are aggregate roots. For this to work, I need a CustomerService, OrderService, PackageRepository and a UserRepository. Like this:
public class OrderController {
public OrderController(CustomerService customerService,
OrderService orderService, Repository<Package> packageRepository,
Repository<User> userRepository)
{
_customerService = customerService;
..
}
}
Imagine the number of dependencies and constructor parameters required to render a more complex view.
Maybe I'm approaching my service layer wrong; I could have a CustomerService which takes care of all this, but my service constructor will then explode. I think I'm violating SRP too much.
I think I'm violating SRP too much.
Bingo.
I find that using a command processing layer makes my applications architecture cleaner and more consistent.
Basically, each service method becomes a command handler class (and the method parameters become a command class), and every query is also its own class.
This won't actually reduce your dependencies - your query will likely still require those same couple of services and repositories to provide the correct data; however, when using an IoC framework like Ninject or Spring it won't matter because they will inject what is needed up the whole chain - and testing should be much easier as a dependency on a specific query is easier to fill and test than a dependency on a service class with many marginally related methods.
Also, now the relationship between the Controller and its dependencies is clear, logic has been removed from the Controller, and the query and command classes are more focused on their individual responsibilities.
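To illustrate the shape (all names here are hypothetical): a command is a plain data class, and its handler is a small class that declares only the dependencies that one operation needs.

public class PlaceOrderCommand
{
    public int CustomerId { get; set; }
    public int PackageId { get; set; }
}

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

public class PlaceOrderHandler : ICommandHandler<PlaceOrderCommand>
{
    private readonly OrderService _orderService; // injected by the IoC container

    public PlaceOrderHandler(OrderService orderService)
    {
        _orderService = orderService;
    }

    public void Handle(PlaceOrderCommand command)
    {
        // only the logic for this one use case lives here
        _orderService.PlaceOrder(command.CustomerId, command.PackageId);
    }
}

The controller then depends on ICommandHandler<PlaceOrderCommand> rather than on four services, and a test only has to fake that one handler.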
Yes, this does cause a bit of an explosion of classes and files. Employing proper object-oriented programming will tend to do that. But, frankly, what's easier to find/organize/manage: a function in a file of dozens of other semi-related functions, or a single file in a directory of dozens of semi-related files? I think the latter, hands down.
CodeBetter had a blog post recently that nearly matches my preferred way of organizing controllers and commands in an MVC app.
Well, you can solve this issue easily by using RenderAction. Just create separate controllers, or introduce child actions in those controllers. Then in the main view call the render actions with the required parameters. This will give you a nice composite view.
Why not have a service for this scenario return a view model for you? That way you only have one dependency in the controller, although your service may have several dependencies of its own.
The book Dependency Injection in .NET suggests introducing "facade services": if you feel you have too many constructor parameters, group related services together and inject the facade instead.
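A hedged sketch of that idea using the types from the question (whether these services really belong together is a domain judgement):

public class OrderPageServices
{
    public OrderPageServices(CustomerService customers, OrderService orders,
        Repository<Package> packages, Repository<User> users)
    {
        Customers = customers;
        Orders = orders;
        Packages = packages;
        Users = users;
    }

    public CustomerService Customers { get; private set; }
    public OrderService Orders { get; private set; }
    public Repository<Package> Packages { get; private set; }
    public Repository<User> Users { get; private set; }
}

public class OrderController : Controller
{
    private readonly OrderPageServices _services;

    public OrderController(OrderPageServices services)
    {
        _services = services;
    }
}

Note the book's point is that a good facade exposes a higher-level operation, not just a bag of services; this sketch is the minimal version.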
Update: I finally had some spare time, so I ended up creating an implementation of what I was talking about in my post below. My implementation is:
public class WindsorServiceFactory : IServiceFactory
{
protected IWindsorContainer _container;
public WindsorServiceFactory(IWindsorContainer windsorContainer)
{
_container = windsorContainer;
}
public ServiceType GetService<ServiceType>() where ServiceType : class
{
// Use windsor to resolve the service class. If the dependency can't be resolved throw an exception
try { return _container.Resolve<ServiceType>(); }
catch (ComponentNotFoundException) { throw new ServiceNotFoundException(typeof(ServiceType)); }
}
}
All that is needed now is to pass my IServiceFactory into my controller constructors, and I am now able to keep my constructors clean while still allowing easy (and flexible) unit tests. More details can be found at my blog if you are interested.
I have noticed the same issue creeping up in my MVC app, and your question got me thinking of how I want to handle this. As I'm using a command and query approach (where each action or query is a separate service class) my controllers are already getting out of hand, and will probably be even worse later on.
After thinking about this, I think the route I am going to take is to create a ServiceFactory class, which would look like:
public class ServiceFactory
{
public ServiceFactory( UserService userService, CustomerService customerService, etc...)
{
// Code to set private service references here
}
public T GetService<T>() where T : IService
{
// Determine if serviceType is a valid service type,
// and return the instantiated version of that service class
// otherwise throw error
}
}
Note that I wrote this up offhand in Notepad++, so the syntax may not be exact, but that's the general idea. Your controller will then end up looking like this:
public class OrderController {
public OrderController(ServiceFactory factory) {
_factory = factory;
}
}
You would then have IoC instantiate your ServiceFactory instance, and everything should work as expected.
The good part about this is that if you realize you have to use the ProductService class in your controller, you don't have to mess with the controller's constructor at all; you just have to call _factory.GetService<ProductService>() in the action method.
Finally, this approach allows you to still mock services out (one of the big reasons for using IoC and passing them straight into the controller's constructor) by just creating a new ServiceFactory in your test code with the mocked services passed in (the rest left as null).
I think this will strike a good balance between flexibility and testability, and it keeps service instantiation in one spot.
After typing this all out I'm actually excited to go home and implement this in my app :)
My previous question made me think again about layers, repository, dependency injection and architectural stuff like this.
My architecture now looks like this:
I am using EF Code First, so I just made POCO classes and a context. That creates the db and model.
One level higher are the business layer classes (providers). I am using a different provider for each domain... like MemberProvider, RoleProvider, TaskProvider etc., and I am making a new instance of my DbContext in each of these providers.
Then I instantiate these providers in my controllers, get the data and send it to the views.
My initial architecture included a repository, which I got rid of because I was told that it just adds complexity ("why not just use EF directly?"). I wanted to do that - work with EF directly from the controllers - but I have to write tests, and that is complicated with a real database. I had to fake or mock the data somehow. So I made an interface for each provider and made fake providers with hardcoded data in lists. And with this I got back to something where I am not sure how to proceed correctly.
These things start to become overcomplicated too quickly... many approaches and "patterns"... it creates just too much noise and useless code.
Is there any SIMPLE and testable architecture for creating an ASP.NET MVC3 application with Entity Framework?
If you want to use TDD (or any other testing approach with high test coverage) and EF together, you must write integration or end-to-end tests. The problem here is that any approach that mocks either the context or the repository just creates tests which test your upper-layer logic (which uses those mocks) but not your application.
Simple example:
Let's define a generic repository:
public interface IGenericRepository<TEntity>
{
IQueryable<TEntity> GetQuery();
...
}
And let's write a business method:
public IEnumerable<MyEntity> DoSomethingImportant()
{
var data = MyEntityRepo.GetQuery().Select((e, i) => e);
...
}
Now if you mock the repository you will use Linq-To-Objects and you will have a green test, but if you run the application with Linq-To-Entities you will get an exception, because the Select overload with an index is not supported in L2E.
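Here is roughly what that false positive looks like as a test (Moq and NUnit assumed, names mine): the mock serves an in-memory list, Linq-To-Objects happily executes the indexed Select, and the test stays green even though the same query throws NotSupportedException under L2E.

[Test]
public void DoSomethingImportant_passes_against_a_mock_but_fails_in_L2E()
{
    var repository = new Mock<IGenericRepository<MyEntity>>();
    repository.Setup(r => r.GetQuery())
        .Returns(new List<MyEntity> { new MyEntity() }.AsQueryable());

    // Linq-To-Objects executes the indexed Select without complaint
    var data = repository.Object.GetQuery().Select((e, i) => e).ToList();

    Assert.AreEqual(1, data.Count); // green here, broken in production
}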
This was a simple example, but the same can happen with using methods in queries and other common mistakes. Moreover, this also affects methods like Add, Update and Delete, usually exposed on the repository. If you don't write a mock which exactly simulates the behavior of the EF context, including referential integrity, you are not testing your implementation.
Another part of the story is lazy loading; its problems can also hardly be detected by unit tests against mocks.
Because of that you should also introduce integration or end-to-end tests which work against a real database, using a real EF context and L2E. By the way, using end-to-end tests is required to use TDD correctly. For writing end-to-end tests in ASP.NET MVC you can use WatiN, and possibly also SpecFlow for BDD. This will add a lot of work, but you will have your application really tested. If you want to read more about TDD I recommend this book (the only disadvantage is that the examples are in Java).
Integration tests make sense if you don't use a generic repository and you hide your queries in some class which does not expose IQueryable but returns data directly.
Example:
public interface IMyEntityRepository
{
MyEntity GetById(int id);
MyEntity GetByName(string name);
}
Now you can just write an integration test to test the implementation of this repository, because the queries are hidden in this class and not exposed to upper layers. But this type of repository is somewhat considered an old-style implementation, as used with stored procedures. You will lose a lot of ORM features with this implementation, or you will have to do a lot of additional work - for example, adding the specification pattern to be able to define queries in an upper layer.
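For completeness, an integration test for such a repository might look like this (the context and repository names are placeholders; the test database and its seed data are assumed to exist):

[Test]
public void GetById_returns_the_persisted_entity()
{
    // real EF context against a dedicated test database - no mocks
    using (var context = new MyDbContext("name=TestDatabase"))
    {
        var repository = new MyEntityRepository(context);

        var entity = repository.GetById(1); // id inserted by the test setup

        Assert.IsNotNull(entity);
    }
}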
In ASP.NET MVC you can partially replace end-to-end tests with integration tests on controller level.
Edit based on comment:
I don't say that you need unit tests, integration tests and end-to-end tests. I say that making tested applications requires much more effort. The amount and types of tests needed depend on the complexity of your application, the expected future of the application, your skills and the skills of other team members.
Small straightforward projects can be created without tests at all (OK, it is not a good idea, but we have all done it and in the end it worked), but once a project passes some threshold you can find that introducing new features or maintaining the project is very hard, because you are never sure if it breaks something which already worked - that is called regression. The best defence against regression is a good set of automated tests.
Unit tests help you test a method. Such tests should ideally cover all execution paths in the method. These tests should be very short and easy to write - the complicated part can be setting up dependencies (mocks, fakes, stubs).
Integration tests help you test functionality across multiple layers and usually across multiple processes (application, database). You don't need to have them for everything; it is more about experience to select where they are helpful.
End-to-end tests are something like validation of a use case / user story / feature. They should cover the whole flow of the requirement.
There is no need to test a feature multiple times - if you know that the feature is tested in an end-to-end test, you don't need to write an integration test for the same code. Also, if you know that a method has only a single execution path which is covered by an integration test, you don't need to write a unit test for it. This works much better with a TDD approach, where you start with a big test (end-to-end or integration) and go deeper to unit tests.
Depending on your development approach, you don't have to start with multiple types of tests from the beginning; you can introduce them later as your application becomes more complex. The exception is TDD/BDD, where you should use at least end-to-end and unit tests before you even write a single line of other code.
So you are asking the wrong question. The question is not what is simpler, but what will help you in the end and what complexity fits your application. If you want an easily unit-tested application and business logic, you should wrap the EF code in some other classes which can be mocked, but at the same time you must introduce other types of tests to ensure that the EF code works.
I can't tell you what approach will fit your environment / project / team / etc., but I can give an example from a past project of mine:
I worked on a project for about 5-6 months with two colleagues. The project was based on ASP.NET MVC 2 + jQuery + EFv4 and was developed in an incremental and iterative way. It had a lot of complicated business logic and a lot of complicated database queries. We started with generic repositories and high code coverage with unit tests + integration tests to validate mapping (simple tests for inserting, deleting, updating and selecting an entity). After a few months we found that our approach didn't work. We had more than 1,200 unit tests, code coverage of about 60% (which is not very good) and a lot of regression problems. Changing anything in the EF model could introduce unexpected problems in parts which had not been touched for several weeks. We found that we were missing integration tests or end-to-end tests for our application logic. The same conclusion was reached by a parallel team working on another project, and using integration tests became the recommendation for new projects.
Does using the repository pattern add complexity? In your scenario I don't think so. It makes TDD easier and your code more manageable. Try to use a generic repository pattern for more separation and cleaner code.
If you want to find out more about TDD and design patterns in Entity Framework, take a look at: http://msdn.microsoft.com/en-us/ff714955.aspx
However it seems like you're looking for an approach to mock-test Entity Framework. One solution would be using a virtual Seed method to generate data on database initialization. Take a look at the Seed section at: http://blogs.msdn.com/b/adonet/archive/2010/09/02/ef-feature-ctp4-dbcontext-and-databases.aspx
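For example, with the released Code First API this can be done with a database initializer whose Seed override inserts known test data (MyDbContext and Member are placeholders here; the CTP4 in the linked post used slightly different names):

public class TestDataInitializer : DropCreateDatabaseAlways<MyDbContext>
{
    protected override void Seed(MyDbContext context)
    {
        // known rows your tests can assert against
        context.Members.Add(new Member { Name = "Test user" });
        context.SaveChanges();
    }
}

// run once before the tests:
Database.SetInitializer(new TestDataInitializer());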
Also you can use some mocking frameworks. The most famous ones I know are:
Rhino Mocks
Moq
Typemock (Commercial)
To see a more complete list of .NET mocking frameworks, check out: https://stackoverflow.com/questions/37359/what-c-mocking-framework-to-use
Another approach would be to use an in-memory database provider like SQLite. Read more at: Is there an in-memory provider for Entity Framework?
Finally, here are some good links about unit testing Entity Framework (some links refer to Entity Framework 4.0, but you'll get the idea):
http://social.msdn.microsoft.com/Forums/en/adodotnetentityframework/thread/678b5871-bec5-4640-a024-71bd4d5c77ff
http://mosesofegypt.net/post/Introducing-Entity-Framework-Unit-Testing-with-TypeMock-Isolator.aspx
What is the way to go to fake my database layer in a unit test?
What I do is use simple ISession and EFSession objects, which are easy to mock in my controller, easy to access with Linq, and strongly typed. They are injected with DI using Ninject.
public interface ISession : IDisposable
{
void CommitChanges();
void Delete<T>(Expression<Func<T, bool>> expression) where T : class, new();
void Delete<T>(T item) where T : class, new();
void DeleteAll<T>() where T : class, new();
T Single<T>(Expression<Func<T, bool>> expression) where T : class, new();
IQueryable<T> All<T>() where T : class, new();
void Add<T>(T item) where T : class, new();
void Add<T>(IEnumerable<T> items) where T : class, new();
void Update<T>(T item) where T : class, new();
}
public class EFSession : ISession
{
DbContext _context;
public EFSession(DbContext context)
{
_context = context;
}
public void CommitChanges()
{
_context.SaveChanges();
}
public void Delete<T>(System.Linq.Expressions.Expression<Func<T, bool>> expression) where T : class, new()
{
var query = All<T>().Where(expression);
foreach (var item in query)
{
Delete(item);
}
}
public void Delete<T>(T item) where T : class, new()
{
_context.Set<T>().Remove(item);
}
public void DeleteAll<T>() where T : class, new()
{
var query = All<T>();
foreach (var item in query)
{
Delete(item);
}
}
public void Dispose()
{
_context.Dispose();
}
public T Single<T>(System.Linq.Expressions.Expression<Func<T, bool>> expression) where T : class, new()
{
return All<T>().FirstOrDefault(expression);
}
public IQueryable<T> All<T>() where T : class, new()
{
return _context.Set<T>().AsQueryable<T>();
}
public void Add<T>(T item) where T : class, new()
{
_context.Set<T>().Add(item);
}
public void Add<T>(IEnumerable<T> items) where T : class, new()
{
foreach (var item in items)
{
Add(item);
}
}
/// <summary>
/// Not needed since EF4 change-tracks entities; just call CommitChanges(). This method does nothing.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="item"></param>
public void Update<T>(T item) where T : class, new()
{
// nothing needed here
}
}
If I want to switch from EF4 to, let's say, MongoDB, I only have to make a MongoSession that implements ISession...
I was having the same problem deciding on the general design of my MVC application. This CodePlex project by Shiju Varghese was a lot of help. It is done in ASP.NET MVC3 and EF Code First, and it utilizes a service layer and a repository layer as well. Dependency injection is done using Unity. It is simple and very easy to follow. It is also backed by four very nice blog posts. It's worth checking out. And don't give up on the repository... yet.
Given:
public ActionResult Create(CategoryViewModel viewModel)
{
if (!ModelState.IsValid)
{
return View(viewModel);
}
Category category = new Category();
category.Parent = daoTemplate.FindByID<Category>(viewModel.ParentId);
category.CopyFrom(viewModel);
daoTemplate.Save(category);
return RedirectToAction("Index");
}
I need to ensure that the newly created category has the correct parent set. How can I do this, if I have no access to the category object outside of the method?
Ultimately, the test you're proposing is really verifying two things:
1) daoTemplate.FindByID<T>() works as expected
2) The Create method calls daoTemplate.FindByID<T>()
Those should be two separate tests.
The first test should be part of a DaoTemplate fixture - apart from that it's difficult to comment on it without seeing more of the source code.
Second, to verify that the action calls the expected method, you'll need to hand-roll a mock object or use a mocking framework. There are numerous popular mocking frameworks for C# (Moq, RhinoMocks, even the venerable NMock2 - see the age-old Stack Overflow question "What C# mocking framework to use?" for a start), and the classic place to get started with mocking is Martin Fowler's article "Mocks Aren't Stubs".
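For instance, with Moq the interaction test could look roughly like this (IDaoTemplate and the controller constructor are assumptions about your surrounding code):

[Test]
public void Create_assigns_the_parent_fetched_by_id()
{
    var dao = new Mock<IDaoTemplate>();
    var parent = new Category();
    dao.Setup(d => d.FindByID<Category>(42)).Returns(parent);

    var controller = new CategoryController(dao.Object);
    controller.Create(new CategoryViewModel { ParentId = 42 });

    // verifies both that the lookup happened and that the saved category got that parent
    dao.Verify(d => d.Save(It.Is<Category>(c => c.Parent == parent)), Times.Once());
}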
All the examples I've seen for overloading usually have only two methods of the same name with different parameters and one using the GET verb while the other uses POST. Is it possible to do two or more overloads on the same method, all with the same verb?
Here's an example of what I'm referring to: Can you overload controller methods in ASP.NET MVC?
I don't think you can overload the same action name with the same verb by default. As that other thread you point to says, you can overload the methods and then use an attribute to change the action name that maps to the method, but I'm guessing that's not what you're looking for.
Another option that I've used before (depends on how complex/different your overloads are) is to simply use nullable values for the parameters & effectively merge your different signatures together. So instead of:
public ActionResult DoSomething(int id)...
public ActionResult DoSomething(string name)...
just have:
public ActionResult DoSomething(int? id, string name)
Not the nicest solution, but if one overload just builds on another then it's not too bad a compromise.
One final option that may be worth giving a go (I haven't tried it & don't even know if it'll work, but logically it should), is to write an implementation of the ActionMethodSelectorAttribute that compares the parameters passed in the ControllerContext to the method signature & tries to make a best match (i.e. tries to resolve the ambiguity a bit more strictly than the default implementation).
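An untested sketch of that idea (the attribute name is mine): the selector only accepts a method when every one of its parameters has a matching value in the request, which disambiguates overloads by their parameter lists.

using System.Linq;
using System.Reflection;
using System.Web.Mvc;

public class RequireRequestValuesAttribute : ActionMethodSelectorAttribute
{
    public override bool IsValidForRequest(ControllerContext controllerContext,
        MethodInfo methodInfo)
    {
        // valid only if the request carries a value for every declared parameter
        return methodInfo.GetParameters()
            .All(p => controllerContext.HttpContext.Request[p.Name] != null);
    }
}

You would then decorate each overload with [RequireRequestValues] and let the selector pick the best match.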
I guess it is not possible, since I found that the MVC framework didn't really care what you put in the parameter list. For example, my action is like:
public ActionResult Index(int id) {...}
It is ok to request like this: Domain.com/Index.aspx
or Domain.com/Index.aspx?id=012901
or even Domain.com/Index.aspx?login=938293
Overloading in a programming language means that you select between different functions (with the same name) based on the input parameters, but MVC in this case doesn't care about them! So, other than ActionVerb overloading, I think it is not possible.