When should ASP.NET MVC views be unit tested? - asp.net-mvc

Suppose you have a project with many developers working on the same ASP.NET MVC application. The project is a long-running one, so team members will change over time. This means that eventually we reach a point where certain views look (totally) different from others, markup-wise.
In order to keep the UI as consistent as possible over time, let's say the initial team develops a set of fluent HTML helper methods for creating a generic view structure, e.g.:
Html.CreateHeader("Header text")
    .CreateForm(Url.Action(...))
    .AddSegment("Person")
    .EditorFor(model => model.Person)
    .AddSegment("...")
    .EditorFor(model => model.Company)
    ...
Although this adds an additional abstraction over everyday ASP.NET MVC view code, it keeps views consistent over time, provided, of course, that these fluent helpers are actually used.
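For illustration, a minimal sketch of what could sit behind such a fluent API; every name below (ViewBuilder, the segment markup, and so on) is a hypothetical assumption, not part of ASP.NET MVC, and the usual System.Web.Mvc / System.Web.Mvc.Html / System.Linq.Expressions usings are assumed:

    // Hypothetical fluent builder behind the helpers shown above.
    public static class ViewBuilderExtensions
    {
        public static ViewBuilder<TModel> CreateHeader<TModel>(this HtmlHelper<TModel> html, string text)
        {
            return new ViewBuilder<TModel>(html).Append("<h1>" + html.Encode(text) + "</h1>");
        }
    }

    public class ViewBuilder<TModel> : IHtmlString
    {
        private readonly HtmlHelper<TModel> _html;
        private readonly StringBuilder _markup = new StringBuilder();

        public ViewBuilder(HtmlHelper<TModel> html) { _html = html; }

        internal ViewBuilder<TModel> Append(string fragment)
        {
            _markup.Append(fragment);
            return this;
        }

        public ViewBuilder<TModel> CreateForm(string actionUrl)
        {
            return Append("<form method=\"post\" action=\"" + actionUrl + "\">");
        }

        public ViewBuilder<TModel> AddSegment(string title)
        {
            // A real implementation would track and close open segments/forms.
            return Append("<fieldset><legend>" + _html.Encode(title) + "</legend>");
        }

        public ViewBuilder<TModel> EditorFor<TValue>(Expression<Func<TModel, TValue>> expression)
        {
            return Append(_html.EditorFor(expression).ToHtmlString());
        }

        // Razor writes the accumulated markup when the chain appears in a view.
        public string ToHtmlString() { return _markup.ToString(); }
    }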
But even though each of these helper methods can be unit tested, I wonder what good it does to test the views themselves.
Questions
My opinion is that you don't unit test anything static (i.e. views). So unless views have rich client-side functionality, nothing should be tested. But if they do, you would be testing the JavaScript code with, e.g., the QUnit library (see the sketch below). Is this thinking correct, or does it really make sense to test something that is intrinsically static in nature, where we'd only be testing the presence of certain elements and their content?
What other aspects of views can be tested and why should they be tested?
During my development I've noticed that unit testing should only be done on non-trivial code; in other words, methods that have at least one branch, and methods long enough that what they do is not completely obvious even without branches.
I tend to avoid writing unit tests for methods that are trivial in nature, because they add nothing except consuming developers' time.
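For the client-side case mentioned above, a minimal QUnit sketch might look like this; toggleDetails is a made-up stand-in for the view's script, not something from the question:

    // Hypothetical behaviour: the view's script exposes toggleDetails(element),
    // which reveals a hidden details panel.
    QUnit.test("toggleDetails reveals the hidden panel", function (assert) {
      var fixture = document.getElementById("qunit-fixture");
      fixture.innerHTML = '<div class="details" style="display:none"></div>';
      var panel = fixture.querySelector(".details");

      toggleDetails(panel);  // function under test (assumed to clear the inline style)

      assert.equal(panel.style.display, "", "details panel becomes visible");
    });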

Views should only present the data that is provided to them; therefore, there shouldn't be any logic to test anyway.
In my opinion the "testing" of views comes under integration testing (checking everything looks OK, links go to the right places, etc.).

Related

MVC4 application with EF5 model first, without ViewModels or Repositories

I'm building an MVC4 app; I've used EF5 model first and kept it pretty simple. This isn't going to be a huge application: there will only ever be 4 or 5 people on it at once, and all users will be authenticated before being able to access any part of the application. It's very simply a place-order, dispatcher-sees-order, dispatcher-completes-order sort of application.
Basically my question is: do I need to be worrying about repositories and ViewModels if the size and scope of my application is so small? Any view that is strongly typed to a domain entity uses all of the properties within that entity. I'm using TryUpdateModel in my controllers and have read some things saying this can cause a lot of problems, but not a lot of information on exactly what those problems can be. I don't want to use an incredibly complicated pattern for a very simple app.
Hopefully I've given enough detail, if anyone wants to see my code just ask, I'm really at a roadblock here though, and could really use some advice from the community. Thanks so much!
ViewModels: Yes
I only see downsides to passing EF entities directly to a view:
You need to do manual whitelisting or blacklisting to prevent over-posting and mass assignment
It becomes very easy to accidentally lazy-load extra data from your view, resulting in SELECT N+1 problems
In my personal opinion, a model should closely resemble the information displayed on the view, and in most cases (except for basic CRUD stuff) a view contains information from more than one entity
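To make the first point concrete, here is a hedged sketch (Person, EditPersonViewModel, and AppDbContext are invented names): binding the form directly to the entity would let a crafted request set IsAdmin, while a view model whitelists by construction.

    // EF entity -- contains a field no form should ever be able to set.
    public class Person
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public bool IsAdmin { get; set; }    // over-posting target if bound directly
    }

    // View model -- exposes only the fields the edit form may bind.
    public class EditPersonViewModel
    {
        public int Id { get; set; }
        [Required]
        public string Name { get; set; }
    }

    public class PeopleController : Controller
    {
        private readonly AppDbContext db = new AppDbContext();  // hypothetical context

        [HttpPost]
        public ActionResult Edit(EditPersonViewModel vm)
        {
            if (!ModelState.IsValid) return View(vm);
            var person = db.People.Find(vm.Id);
            person.Name = vm.Name;           // explicit, whitelisted mapping
            db.SaveChanges();                // IsAdmin can never arrive from the form
            return RedirectToAction("Index");
        }
    }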
Repositories: No
The Entity Framework DbContext is already an implementation of the Repository and Unit of Work patterns. If you want everything to be testable, just test against a separate database. If you want to make things loosely coupled, there are ways to do that with EF without using repositories too. To be honest, I really don't understand the popularity of custom repositories.
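To make that concrete: the DbSet plays the repository role and SaveChanges is the unit of work, so a thin extra layer often just restates what EF already gives you. A hedged sketch, with AppDbContext and Order as illustrative names and the usual System.Linq usings assumed:

    using (var db = new AppDbContext())                 // hypothetical EF context
    {
        db.Orders.Add(new Order { Number = "A-42" });   // DbSet<Order> plays the repository role
        var open = db.Orders.Where(o => !o.Dispatched).ToList();  // querying the same "repository"
        db.SaveChanges();                               // SaveChanges is the unit-of-work commit
    }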
In my experience, the requirements on a software solution tend to evolve over time well beyond the initial requirement set.
By following architectural best practices now, you will be much better able to accommodate changes to the solution over its entire lifetime.
The Repository pattern and ViewModels are both powerful, and not very difficult or time-consuming to implement. I would suggest using them even for small projects.
Yes, you still want to use a repository and view models. Both of these tools let you put code in one place instead of all over the place, and they will save you time. More than likely, they will save you copy-paste errors too.
Moreover, having these tools in place will make it easier to expand the system in the future, instead of having to pore through code with poor readability.
Separating your concerns will lead to less code overall, a more efficient system, and smaller controllers and code sections. View models and a repository are not heavily intrusive to implement; it is not as if you are going to implement a controller factory or dependency injection.

Testing: how to focus on behavior instead of implementation without losing speed?

It seems that there are two totally different approaches to testing, and I would like to cite both of them.
The thing is, those opinions were stated five years ago (in 2007), and I am interested in what has changed since then and which way I should go.
Brandon Keepers:
The theory is that tests are supposed to be agnostic of the implementation. This leads to less brittle tests and actually tests the outcome (or behavior).
With RSpec, I feel like the common approach of completely mocking your models to test your controllers ends up forcing you to look too much into the implementation of your controller.
This by itself is not too bad, but the problem is that it peers too much into the controller to dictate how the model is used. Why does it matter if my controller calls Thing.new? What if my controller decides to take the Thing.create! and rescue route? What if my model has a special initializer method, like Thing.build_with_foo? My spec for behavior should not fail if I change the implementation.
This problem gets even worse when you have nested resources and are creating multiple models per controller. Some of my setup methods end up being 15 or more lines long and VERY fragile.
RSpec's intention is to completely isolate your controller logic from your models, which sounds good in theory, but almost runs against the grain for an integrated stack like Rails. Especially if you practice the skinny controller/fat model discipline, the amount of logic in the controller becomes very small, and the setup becomes huge.
So what's a BDD-wannabe to do? Taking a step back, the behavior that I really want to test is not that my controller calls Thing.new, but that given parameters X, it creates a new thing and redirects to it.
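To make the contrast concrete, here is a hedged RSpec sketch with a hypothetical Thing resource; the first example breaks as soon as the controller switches from Thing.new to Thing.create!, while the second only checks the outcome:

    # Implementation-coupled: pins the controller to calling Thing.new.
    it "creates a thing (brittle version)" do
      thing = mock_model(Thing, :save => true)
      Thing.should_receive(:new).with("name" => "Foo").and_return(thing)
      post :create, :thing => { "name" => "Foo" }
    end

    # Behavior-focused: given parameters X, a thing exists and we redirect to it.
    it "creates a thing and redirects to it" do
      post :create, :thing => { "name" => "Foo" }
      thing = Thing.find_by_name("Foo")
      thing.should_not be_nil
      response.should redirect_to(thing)
    end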
David Chelimsky:
It’s all about trade-offs.
The fact that AR chooses inheritance rather than delegation puts us in
a testing bind – we have to be coupled to the database OR we have to
be more intimate with the implementation. We accept this design choice
because we reap benefits in expressiveness and DRY-ness.
In grappling with the dilemma, I chose faster tests at the cost of
slightly more brittle. You’re choosing less brittle tests at the cost
of them running slightly slower. It’s a trade-off either way.
In practice, I run the tests hundreds, if not thousands, of times a
day (I use autotest and take very granular steps) and I change whether
I use “new” or “create” almost never. Also due to granular steps, new
models that appear are quite volatile at first. The valid_thing_attrs
approach minimizes the pain from this a bit, but it still means that
every new required field means that I have to change
valid_thing_attrs.
But if your approach is working for you in practice, then its good! In
fact, I’d strongly recommend that you publish a plugin with generators
that produce the examples the way you like them. I’m sure that a lot
of people would benefit from that.
Ryan Bates:
Out of curiosity, how often do you use mocks in your tests/specs? Perhaps I'm doing something wrong, but I'm finding it severely limiting. Since switching to RSpec over a month ago, I've been doing what they recommend in the docs, where the controller and view layers do not hit the database at all and the models are completely mocked out. This gives you a nice speed boost and makes some things easier, but I'm finding the cons of doing this far outweigh the pros. Since using mocks, my specs have turned into a maintenance nightmare.
Specs are meant to test the behavior, not the implementation. I don't care if a method was called; I just want to make sure the resulting output is correct. Because mocking makes specs picky about the implementation, it makes simple refactorings (that don't change the behavior) impossible to do without having to constantly go back and "fix" the specs. I'm very opinionated about what a spec/test should cover. A test should only break when the app breaks. This is one reason why I hardly test the view layer: I find it too rigid. It often leads to tests breaking without the app breaking when changing little things in the view. I'm finding the same problem with mocks. On top of all this, I just realized today that mocking/stubbing a class method (sometimes) sticks around between specs. Specs should be self-contained and not influenced by other specs. This breaks that rule and leads to tricky bugs. What have I learned from all this? Be careful where you use mocking. Stubbing is not as bad, but still has some of the same issues.
I took the past few hours and removed nearly all mocks from my specs. I also merged the controller and view specs into one using "integrate_views" in the controller spec. I am also loading all fixtures for each controller spec so there's some test data to fill the views. The end result? My specs are shorter, simpler, more consistent, less rigid, and they test the entire stack together (model, view, controller) so no bugs can slip through the cracks. I'm not saying this is the "right" way for everyone. If your project requires a very strict spec case then it may not be for you, but in my case this is worlds better than what I had before using mocks. I still think stubbing is a good solution in a few spots, so I'm still doing that.
I think all three opinions are still completely valid. Ryan and I were struggling with the maintainability of mocking, while David felt the maintenance tradeoff was worth it for the increase in speed.
But these tradeoffs are symptoms of a deeper problem, which David alluded to in 2007: ActiveRecord. The design of ActiveRecord encourages you to create god objects that do too much, know too much about the rest of the system, and have too much surface area. This leads to tests that have too much to test, know too much about the rest of the system, and are either too slow or brittle.
So what's the solution? Separate as much of your application from the framework as possible. Write lots of small classes that model your domain and don't inherit from anything. Each object should have limited surface area (no more than a few methods) and explicit dependencies passed in through the constructor.
With this approach, I've only been writing two types of tests: isolated unit tests, and full-stack system tests. In the isolation tests, I mock or stub everything that is not the object under test. These tests are insanely fast and often don't even require loading the whole Rails environment. The full stack tests exercise the whole system. They are painfully slow and give useless feedback when they fail. I write as few as necessary, but enough to give me confidence that all my well-tested objects integrate well.
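A hedged sketch of that style (all names invented): a small plain Ruby class with its collaborator passed in through the constructor, plus an isolated spec that never loads Rails.

    # app/services/invoice_calculator.rb -- plain Ruby, inherits from nothing.
    class InvoiceCalculator
      def initialize(tax_policy)
        @tax_policy = tax_policy            # explicit dependency, injected
      end

      def total(net_amount)
        net_amount + @tax_policy.tax_for(net_amount)
      end
    end

    # spec/services/invoice_calculator_spec.rb -- no Rails, no database.
    describe InvoiceCalculator do
      it "adds the tax the policy computes" do
        policy = double(:tax_for => 2.0)    # the collaborator is a test double
        InvoiceCalculator.new(policy).total(10.0).should eq(12.0)
      end
    end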
Unfortunately, I can't point you to an example project that does this well (yet). I talk a little about it in my presentation Why Our Code Smells; also watch Corey Haines' presentation on Fast Rails Tests, and I highly recommend reading Growing Object-Oriented Software, Guided by Tests.
Thanks for compiling the quotes from 2007. It is fun to look back.
My current testing approach is covered in this RailsCasts episode, and I have been quite happy with it. In summary, I have two levels of tests.
High level: I use request specs in RSpec, Capybara, and VCR. Tests can be flagged to execute JavaScript as necessary. Mocking is avoided here because the goal is to test the entire stack. Each controller action is tested at least once, maybe a few times.
Low level: This is where all complex logic is tested - primarily models and helpers. I avoid mocking here as well. The tests hit the database or surrounding objects when necessary.
Notice there are no controller or view specs. I feel these are adequately covered in request specs.
Since there is little mocking, how do I keep the tests fast? Here are some tips.
Avoid excessive branching logic in the high level tests. Any complex logic should be moved to the lower level.
When generating records (such as with Factory Girl), use build first and only switch to create when necessary (see the sketch after these tips).
Use Guard with Spork to skip the Rails startup time. The relevant tests are often done within a few seconds after saving the file. Use a :focus tag in RSpec to limit which tests run when working on a specific area. If it's a large test suite, set all_after_pass: false, all_on_start: false in the Guardfile to only run them all when needed.
I use multiple assertions per test. Executing the same setup code for each assertion will greatly increase the test time. RSpec will print out the line that failed so it is easy to locate it.
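For the build-versus-create tip, a hedged FactoryGirl sketch (the :post factory and Post model are hypothetical):

    # build instantiates the object in memory only -- no INSERT, so it's fast.
    post = FactoryGirl.build(:post, :title => "Draft")
    post.should be_valid                    # pure validation/logic checks need no DB

    # create hits the database -- use it only when persistence itself matters.
    post = FactoryGirl.create(:post)
    Post.find(post.id).should_not be_nil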
I find mocking adds brittleness to the tests which is why I avoid it. True, it can be great as an aid for OO design, but in the structure of a Rails app this doesn't feel as effective. Instead I rely heavily on refactoring and let the code itself tell me how the design should go.
This approach works best on small-to-medium-sized Rails applications without extensive, complex domain logic.
Great questions and great discussion. @ryanb and @bkeepers mention that they only write two types of tests. I take a similar approach, but have a third type of test:
Unit tests: isolated tests, typically, but not always, against plain Ruby objects. My unit tests don't involve the DB, 3rd-party API calls, or any other external stuff.
Integration tests: these are still focused on testing one class; the difference is that they integrate that class with the external stuff I avoid in my unit tests. My models will often have both unit tests and integration tests, where the unit tests focus on the pure logic that can be tested without involving the DB, and the integration tests involve the DB. In addition, I tend to test 3rd-party API wrappers with integration tests, using VCR to keep the tests fast and deterministic, but letting my CI builds make the HTTP requests for real (to catch any API changes).
Acceptance tests: end-to-end tests, for an entire feature. This isn't just about UI testing via Capybara; I do the same in my gems, which may not have an HTML UI at all. In those cases, these exercise whatever the gem does end-to-end. I also tend to use VCR in these tests (if they make external HTTP requests), and as in my integration tests, my CI build is set up to make the HTTP requests for real.
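As a hedged sketch of the VCR usage described above (WeatherClient is a made-up API wrapper):

    # The first run records the real HTTP interaction into a cassette; later
    # runs replay it, keeping the test fast and deterministic. CI can be set
    # to re-record (e.g. :record => :all) to catch upstream API changes.
    it "fetches the current temperature" do
      VCR.use_cassette("weather/current") do
        WeatherClient.new.current_temperature("Oslo").should be_a(Numeric)
      end
    end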
As far as mocking goes, I don't have a "one size fits all" approach. I've definitely overmocked in the past, but I still find it to be a very useful technique, especially when using something like rspec-fire. In general, I mock collaborators playing roles freely (particularly if I own them, and they are service objects) and try to avoid it in most other cases.
Probably the biggest change to my testing over the last year or so has been inspired by DAS (Destroy All Software): whereas I used to have a spec_helper.rb that loads the entire environment, now I explicitly load just the class under test (and any dependencies). Besides the improved test speed (which does make a huge difference!), it helps me identify when my class under test is pulling in too many dependencies.
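A hedged sketch of that loading strategy (paths and names are illustrative):

    # spec/unit/price_calculator_spec.rb
    # No `require "spec_helper"` -- that would boot the whole Rails environment.
    # Load only the class under test and its direct dependencies instead.
    require_relative "../../app/models/price_calculator"

    describe PriceCalculator do
      it "applies the discount" do
        PriceCalculator.new(:discount => 0.1).price_for(100).should eq(90)
      end
    end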

Separate the controllers in a new project? Is this a good design?

My colleague told me to separate my controllers into a separate project to make unit testing as easy as possible, and he also told me to create a solution containing just the controllers project and the test project, to avoid loading the whole application when running unit tests. Is it a good approach to separate the controllers into a new project?
I am not sure there is a simple yes-or-no answer to this question. I would think that your project would have to be very, very large to have an impact on your unit testing. My personal opinion is to leave the controllers in the web project along with the views and view models. However, I am a fan of moving the models to a separate project. My reasons for doing so have less to do with easier unit testing and more with reusing the data access (models) in other applications.
In my opinion you should always at least separate the controllers from your view project (usually a web project for me), because the idea is that the controllers should be able to be used with any view (maybe later you decide to use them for a Windows Forms project, for example). It keeps the namespaces a bit cleaner as well.
From my point of view, moving the controllers out to a separate project has two sides to consider. On the one hand, it forces you to think about how to solve problems with low coupling, and it is precisely loosely coupled classes that can be tested more easily than tightly coupled ones.
On the other hand, having the controllers in the same project as the views is kind of logical, because the controllers normally know about the views.
If you think of reusability, there may be something arguable here, because controllers are often "glue" components, meaning there is a lot of wiring in them.
This seems like a good idea at first: creating a presentation layer in the middle and keeping your MVC project containing only views makes it truly a UI project. On the other hand, you will probably lose the tooling support for the views; you have to ignore all the warnings and make sure yourself that all the views are there and strongly typed to your objects.
I don't understand the concern about referencing your MVC project in your test suite, since you will probably bring in the MVC namespace anyway.

Why should we use Coded UI when we have SpecFlow?

We have been using SpecFlow and WatiN for acceptance tests on my current project. The customer wants us to use Microsoft Coded UI instead. I have never tried Coded UI, but from what I've seen so far it looks cumbersome. I want to specify my acceptance tests up front, before I have a UI, not as the result of some record/playback stuff. Anyway, can someone please tell me why we should throw away the SpecFlow/WatiN combo and replace it with Coded UI?
I've also read that you can combine SpecFlow with Coded UI, but it looks like a lot of overhead for something I am already doing fine in SpecFlow.
I wrote a blog post on how to do this that you might find useful:
http://rburnham.wordpress.com/2011/03/15/bdd-ui-automation-with-specflow-and-coded-ui-tests/
The pros and cons of Coded UI Tests, as I see them: you are testing the application exactly how the user will be using it, which is good for acceptance tests and really good for end-to-end testing, but it also has its limitations. In the past, UI tests have been known to be fragile; for example, when MS created the VS2010 UI, almost all of their UI tests broke, the main reason being the technology change. Coded UI Tests help limit this through the way they match a control: they use more of a probability-based match, meaning they try to find the best match based on the information available, such as the control name.
For us, Coded UI Tests were the choice because of technology limitations. Our legacy app is VB, and although CUIT does not work great with it (I'm in the process of writing an extension to get better control information), it was still our only choice.
Also keep in mind that CUIT is new and has its own limitations. You should be prepared to be very structured in the way you lay out your project, as maintaining your UIMaps can be a bit of manual work due to the current end-to-end behaviour in VS2010. For example, creating a CUIT from an existing action recording always places the test in a UIMap called UIMap.uitest, and there is no way to change that or transfer it to another UIMap; if you use multiple UIMaps, this means you will need to record your steps first and then use them in your test. However, being in .NET, it is still very flexible.
By far the best thing about SpecFlow is its Gherkin syntax, for readability and living documentation. Normally you're testing features or behaviours of your app, which is where the value comes from. It generally aims the test just below the UI, so there is a little less chance of the test breaking when the UI changes here and there. To me, SpecFlow is great when your application is under constant change and you want to ensure existing features keep working. It fits well in a Scrum environment, where you can write your scenarios as a description of how the feature should work. One limitation of SpecFlow I can see is that it's open to interpretation; because of this, it can be easy to write a test that is not very reusable and is hard to maintain. I like to use more generic terms to describe my steps, like "Log in as User1", instead of "Go to Login Page, Enter Username and Password, Click Login" (see the sketch below); describing steps at that granular a level makes them harder to reuse and tightly couples them to the UI. How the login actually works should be up to the code behind, not the SpecFlow feature.
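To illustrate the point about generic steps, a hedged Gherkin sketch: the first scenario survives a login-page redesign because the wiring lives in the step definition, while the second is coupled to the page layout.

    Scenario: Place an order
      Given I am logged in as User1
      When I place an order for 2 widgets
      Then the dispatcher should see the order

    # Brittle, granular version -- tied to the login page's current layout:
    Scenario: Log in
      Given I go to the login page
      And I enter "user1" into the username field
      And I enter "secret" into the password field
      When I click the login button
      Then I should be logged in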
Combining the two, however, seems to us more beneficial than just using Coded UI Tests. If we decide to completely change the UI, we would at least have the expected behaviours stored in our SpecFlow features in a way anyone can understand. In the end you need to consider how the application will evolve and what type of application it is.

How do you plan your Rails app?

I'm starting a Rails app for a customer and am considering either creating a mind map or jumping straight to a Cucumber specification.
How do you plan your Rails app?
As an additional question: say you also start with Cucumber, at which point would you write unit tests? Before satisfying the specifications?
I've got a six-step process.
1. I prefer to work out the model relationships and uses before doing anything. Generally I try to define models as units containing coherent chunks of information. Usually this starts by identifying the orthogonal resources my application will need (Users, Posts, etc.). I then figure out what information each of those resources absolutely needs (attributes) and may potentially need (associations), and how that information will likely be operated on (methods); from there I define a set of rules to govern resource consistency (validations). I usually iterate over my design a few times, because the act of defining other models usually makes me rethink ones I've already done. Once I have a model design I like, I will start refactoring or specializing (subclassing) models to clarify the design.
2. I write the migrations and make skeletons for my models. I usually won't write tests until I have a first draft of methods and validations implemented; it's not always obvious how to implement things until you've given it some moderate thought.
3. Next comes the test suite. It doesn't matter what I use to write the tests, so long as I can be certain the backend is sane.
4. This is when I piece together the control flow. What happens on a successful request? An unsuccessful one? Which controller actions will link to others? Usually there is a 1-1 mapping between controllers and models (not counting subclasses of models); every so often I'll encounter situations where I need to act on multiple model types, and for those I'll probably create a new controller. Depending on how complex my app is, I may model the flow as a state machine.
5. Then I create the views. I start by sketching out the UI, which is heavily influenced by my models' relationships and attributes. I abstract out common parts, then write the views.
6. Polish the UI. I create the CSS and start to replace links with remote calls, or even just JavaScript where appropriate.
I may interleave steps 2 and 3; I find it's very easy to write a test just after I write the code to be tested, especially because I'm usually testing things in a console as I write, and half the test is written by pasting from the console.
I may also compartmentalize steps 4 and 5 for each model/controller. At any point I may go back and revise a previous decision and propagate those changes through my steps.
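As a hedged illustration of steps 1-2, using the Users/Posts resources named above, a model skeleton capturing attributes, associations, and consistency rules might look like:

    # Output of steps 1-2: a skeleton recording the design decisions.
    class Post < ActiveRecord::Base
      belongs_to :user                        # association identified in step 1

      validates :title, :presence => true    # consistency rules (validations)
      validates :body, :length => { :minimum => 10 }

      # First-draft method stub; the test suite in step 3 covers it once it settles.
      def publish!
        update_attribute(:published_at, Time.now)
      end
    end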
I start with sketches of the user interface and then progress to HTML mockups. Once the UI design is finalised I can identify the RESTful resources in the application and their relationships.
I don't think writing only Cucumber features as specifications is a good idea. Writing test code without being able to see it pass leads to errors in the tests and increases the time you'll need to correct them later.
So I'd do the following:
Write a mind map, but keep it simple, with the major ideas of the project.
Start writing tests and coding at the same time (write one test, make it pass, write another, ...).
That way you'll write your specifications while driving your application, keeping it clean but also remaining agile and able to change some ideas in the middle of the project.
