I have read in VIPER blog posts that moving a view controller's code into the Presenter makes it easier to unit test. The reason given in the posts was that the Presenter doesn't contain any UIKit-related code.
How does this make unit testing easier? Can anyone explain this in detail? Is there any other advantage to this, apart from avoiding the Massive View Controller problem?
The biggest problem in unit testing is how to mock things. You want to test a method, but that method calls three other methods you don't want to exercise, so you mock them to return fixed values.
This is pretty easy in languages like JavaScript, where you can substitute a method on any object, or in Objective-C, where you can do the same (although with a bit more difficulty).
It is not easy in a language like Swift. That is why VIPER splits the view controller into units (Presenter, Interactor, View, Router), each with its own protocol. To mock one of the units, you just implement its protocol and use that implementation instead of the real Presenter or View.
(you can actually use some tools to generate the mocks for you dynamically in tests)
That makes unit testing much much easier.
However, note that unit testing UI is never easy. UI usually operates in terms that are difficult to unit test, and unit testing it almost always means duplicating a lot of your app code in the tests. UI is more commonly tested via integration tests (e.g. automated clicking and validating what is visible on the screen).
VIPER is not a bad architecture. Separation of concerns is something many programmers struggle with, and it's not a bad idea to have strict architectural rules. With complex screens you still won't be able to avoid big controllers, but at least you will be forced to move some code out of the controller.
Massive View Controllers are not a problem of the MVC pattern. They are a problem of poor separation of concerns, and VIPER's strict rules help to avoid that.
It seems that there are two totally different approaches to testing, and I would like to quote both of them.
The thing is, those opinions were stated five years ago (in 2007), and I am interested in what has changed since then and which way I should go.
Brandon Keepers:
The theory is that tests are supposed to be agnostic of the
implementation. This leads to less brittle tests and actually tests
the outcome (or behavior).
With RSpec, I feel like the common approach of completely mocking your
models to test your controllers ends up forcing you to look too much
into the implementation of your controller.
This by itself is not too bad, but the problem is that it peers too
much into the controller to dictate how the model is used. Why does it
matter if my controller calls Thing.new? What if my controller decides
to take the Thing.create! and rescue route? What if my model has a
special initializer method, like Thing.build_with_foo? My spec for
behavior should not fail if I change the implementation.
This problem gets even worse when you have nested resources and are
creating multiple models per controller. Some of my setup methods end
up being 15 or more lines long and VERY fragile.
RSpec’s intention is to completely isolate your controller logic from
your models, which sounds good in theory, but almost runs against the
grain for an integrated stack like Rails. Especially if you practice
the skinny controller/fat model discipline, the amount of logic in the
controller becomes very small, and the setup becomes huge.
So what’s a BDD-wannabe to do? Taking a step back, the behavior that I
really want to test is not that my controller calls Thing.new, but
that given parameters X, it creates a new thing and redirects to it.
David Chelimsky:
It’s all about trade-offs.
The fact that AR chooses inheritance rather than delegation puts us in
a testing bind – we have to be coupled to the database OR we have to
be more intimate with the implementation. We accept this design choice
because we reap benefits in expressiveness and DRY-ness.
In grappling with the dilemma, I chose faster tests at the cost of
slightly more brittle. You’re choosing less brittle tests at the cost
of them running slightly slower. It’s a trade-off either way.
In practice, I run the tests hundreds, if not thousands, of times a
day (I use autotest and take very granular steps) and I change whether
I use “new” or “create” almost never. Also due to granular steps, new
models that appear are quite volatile at first. The valid_thing_attrs
approach minimizes the pain from this a bit, but it still means that
every new required field means that I have to change
valid_thing_attrs.
But if your approach is working for you in practice, then its good! In
fact, I’d strongly recommend that you publish a plugin with generators
that produce the examples the way you like them. I’m sure that a lot
of people would benefit from that.
Ryan Bates:
Out of curiosity, how often do you use mocks in your tests/specs?
Perhaps I'm doing something wrong, but I'm finding it severely
limiting. Since switching to rSpec over a month ago, I've been doing
what they recommend in the docs where the controller and view layers
do not hit the database at all and the models are completely mocked
out. This gives you a nice speed boost and makes some things easier,
but I'm finding the cons of doing this far outweigh the pros. Since
using mocks, my specs have turned into a maintenance nightmare. Specs
are meant to test the behavior, not the implementation. I don't care
if a method was called I just want to make sure the resulting output
is correct. Because mocking makes specs picky about the
implementation, it makes simple refactorings (that don't change the
behavior) impossible to do without having to constantly go back and
"fix" the specs. I'm very opinionated about what a spec/tests should
cover. A test should only break when the app breaks. This is one
reason why I hardly test the view layer because I find it too rigid.
It often leads to tests breaking without the app breaking when
changing little things in the view. I'm finding the same problem with
mocks. On top of all this, I just realized today that mocking/stubbing
a class method (sometimes) sticks around between specs. Specs should
be self contained and not influenced by other specs. This breaks that
rule and leads to tricky bugs. What have I learned from all this? Be
careful where you use mocking. Stubbing is not as bad, but still has
some of the same issues.
I took the past few hours and removed nearly all mocks from my specs.
I also merged the controller and view specs into one using
"integrate_views" in the controller spec. I am also loading all
fixtures for each controller spec so there's some test data to fill
the views. The end result? My specs are shorter, simpler, more
consistent, less rigid, and they test the entire stack together
(model, view, controller) so no bugs can slip through the cracks. I'm
not saying this is the "right" way for everyone. If your project
requires a very strict spec case then it may not be for you, but in my
case this is worlds better than what I had before using mocks. I still
think stubbing is a good solution in a few spots so I'm still doing
that.
I think all three opinions are still completely valid. Ryan and I were struggling with the maintainability of mocking, while David felt the maintenance tradeoff was worth it for the increase in speed.
But these tradeoffs are symptoms of a deeper problem, which David alluded to in 2007: ActiveRecord. The design of ActiveRecord encourages you to create god objects that do too much, know too much about the rest of the system, and have too much surface area. This leads to tests that have too much to test, know too much about the rest of the system, and are either too slow or brittle.
So what's the solution? Separate as much of your application from the framework as possible. Write lots of small classes that model your domain and don't inherit from anything. Each object should have limited surface area (no more than a few methods) and explicit dependencies passed in through the constructor.
With this approach, I've only been writing two types of tests: isolated unit tests, and full-stack system tests. In the isolation tests, I mock or stub everything that is not the object under test. These tests are insanely fast and often don't even require loading the whole Rails environment. The full stack tests exercise the whole system. They are painfully slow and give useless feedback when they fail. I write as few as necessary, but enough to give me confidence that all my well-tested objects integrate well.
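For illustration, here is a rough sketch of the kind of isolated test I mean; the class, its collaborator, and its methods are invented for the example:

```ruby
# A plain Ruby object: no ActiveRecord, and its one collaborator is
# passed in through the constructor instead of being looked up.
class InvoiceCalculator
  def initialize(tax_policy)
    @tax_policy = tax_policy
  end

  def total(subtotal)
    subtotal + @tax_policy.tax_for(subtotal)
  end
end

# The isolated spec stubs the collaborator, so nothing else loads and
# nothing touches the database. (Class and spec shown together for brevity.)
describe InvoiceCalculator do
  it "adds tax from the injected policy to the subtotal" do
    tax_policy = double("tax_policy", tax_for: 5)
    calculator = InvoiceCalculator.new(tax_policy)

    expect(calculator.total(100)).to eq(105)
  end
end
```

Because the dependency is explicit, the test never has to reach into the framework to build one.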
Unfortunately, I can't point you to an example project that does this well (yet). I talk a little about it in my presentation Why Our Code Smells; also watch Corey Haines' presentation on Fast Rails Tests, and I highly recommend reading Growing Object-Oriented Software, Guided by Tests.
Thanks for compiling the quotes from 2007. It is fun to look back.
My current testing approach is covered in this RailsCasts episode, and I have been quite happy with it. In summary, I have two levels of tests.
High level: I use RSpec request specs with Capybara and VCR. Tests can be flagged to execute JavaScript as necessary. Mocking is avoided here because the goal is to test the entire stack. Each controller action is tested at least once, maybe a few times (see the sketch below).
Low level: This is where all complex logic is tested - primarily models and helpers. I avoid mocking here as well. The tests hit the database or surrounding objects when necessary.
Notice there are no controller or view specs. I feel these are adequately covered in request specs.
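As a rough sketch of what one of these high-level request specs might look like, assuming Capybara and Factory Girl are set up (the Product model, routes, and page text are invented for the example):

```ruby
# spec/requests/products_spec.rb
require "spec_helper"

describe "Products" do
  it "lists existing products" do
    FactoryGirl.create(:product, name: "Widget")

    visit products_path
    expect(page).to have_content("Widget")
  end

  # js: true runs this example through the JavaScript driver
  it "creates a product", js: true do
    visit new_product_path
    fill_in "Name", with: "Gadget"
    click_button "Create Product"

    expect(page).to have_content("Gadget")
  end
end
```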
Since there is little mocking, how do I keep the tests fast? Here are some tips.
Avoid excessive branching logic in the high level tests. Any complex logic should be moved to the lower level.
When generating records (such as with Factory Girl), use build first and only switch to create when necessary (see the sketch after these tips).
Use Guard with Spork to skip the Rails startup time. The relevant tests are often done within a few seconds after saving the file. Use a :focus tag in RSpec to limit which tests run when working on a specific area. If it's a large test suite, set all_after_pass: false, all_on_start: false in the Guardfile to only run them all when needed.
I use multiple assertions per test. Repeating the same setup code for each assertion greatly increases the test time. RSpec prints the line that failed, so it is easy to locate.
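Here is a loose sketch of the build-versus-create and multiple-assertions tips together; the Order model, its factory, and its methods are invented for the example:

```ruby
# spec/models/order_spec.rb
require "spec_helper"

describe Order do
  # build is enough for a validation check -- nothing hits the database
  it "is invalid without an email", focus: true do
    order = FactoryGirl.build(:order, email: nil)
    expect(order).not_to be_valid
  end

  # one example, several assertions: the setup only runs once,
  # which keeps the suite fast
  it "calculates totals" do
    order = FactoryGirl.create(:order)   # create only because items must persist
    order.add_item(price: 10, quantity: 2)

    expect(order.subtotal).to eq(20)
    expect(order.total_items).to eq(2)
  end
end
```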
I find mocking adds brittleness to the tests, which is why I avoid it. True, it can be great as an aid for OO design, but within the structure of a Rails app it doesn't feel as effective. Instead I rely heavily on refactoring and let the code itself tell me how the design should go.
This approach works best on small to medium-sized Rails applications without extensive, complex domain logic.
Great questions and great discussion. @ryanb and @bkeepers mention that they only write two types of tests. I take a similar approach, but have a third type of test:
Unit tests: isolated tests, typically, but not always, against plain Ruby objects. My unit tests don't involve the DB, 3rd-party API calls, or any other external stuff.
Integration tests: these are still focused on testing one class; the difference is that they integrate that class with the external stuff I avoid in my unit tests. My models will often have both unit tests and integration tests, where the unit tests focus on the pure logic that can be tested without involving the DB, and the integration tests involve the DB. In addition, I tend to test 3rd-party API wrappers with integration tests, using VCR to keep the tests fast and deterministic, but letting my CI builds make the HTTP requests for real (to catch any API changes); see the sketch after this list.
Acceptance tests: end-to-end tests for an entire feature. This isn't just about UI testing via Capybara; I do the same in my gems, which may not have an HTML UI at all. In those cases, the tests exercise whatever the gem does end to end. I also tend to use VCR in these tests (if they make external HTTP requests), and as in my integration tests, my CI build is set up to make the HTTP requests for real.
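As an example of the integration-test style with VCR, here is a rough sketch against a hypothetical third-party API wrapper (the client class, endpoint, and cassette name are all invented, and the VCR configuration would normally live in a support file):

```ruby
require "net/http"
require "json"
require "vcr"

# Hypothetical wrapper around a third-party weather API.
class WeatherClient
  def initialize(api_key)
    @api_key = api_key
  end

  def current_temperature(city)
    url = URI("https://api.example-weather.test/current?city=#{city}&key=#{@api_key}")
    JSON.parse(Net::HTTP.get(url))["temperature"]
  end
end

VCR.configure do |c|
  c.cassette_library_dir = "spec/cassettes"  # recorded HTTP responses live here
  c.hook_into :webmock
end

describe WeatherClient do
  it "fetches the current temperature" do
    # The first run records the real HTTP response into the cassette;
    # later runs replay it, keeping the test fast and deterministic.
    VCR.use_cassette("weather/current") do
      client = WeatherClient.new("test-key")
      expect(client.current_temperature("Boston")).to be_a(Numeric)
    end
  end
end
```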
As far as mocking goes, I don't have a "one size fits all" approach. I've definitely over-mocked in the past, but I still find it to be a very useful technique, especially when using something like rspec-fire. In general, I freely mock collaborators that are playing roles (particularly if I own them and they are service objects) and try to avoid it in most other cases.
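The appeal of rspec-fire is that the double verifies the stubbed methods actually exist on the class it stands in for; the same idea now ships in RSpec 3 as verifying doubles. Here is a rough sketch using the built-in instance_double, with the collaborator and service object invented for the example:

```ruby
# Hypothetical collaborators, defined here so the verifying double has
# a real interface to check against.
class PaymentGateway
  def charge(amount)
    # talks to the real payment provider in production
  end
end

class OrderProcessor
  def initialize(gateway)
    @gateway = gateway
  end

  def process(order_total)
    @gateway.charge(order_total)
  end
end

describe OrderProcessor do
  it "charges the gateway for the order total" do
    # instance_double raises if PaymentGateway has no #charge method,
    # so the mock can't silently drift away from the real interface.
    gateway = instance_double(PaymentGateway, charge: true)

    OrderProcessor.new(gateway).process(25)

    expect(gateway).to have_received(:charge).with(25)
  end
end
```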
Probably the biggest change to my testing over the last year or so has been inspired by DAS: whereas I used to have a spec_helper.rb that loads the entire environment, now I explicitly load just the class under test (and any dependencies). Besides the improved test speed (which makes a huge difference!), it helps me identify when my class under test is pulling in too many dependencies.
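Concretely, that means the isolated spec skips spec_helper.rb and requires only the file under test; a minimal sketch, with the class and paths invented:

```ruby
# lib/price_formatter.rb -- a small object with no Rails dependencies
class PriceFormatter
  def format(cents)
    "$%.2f" % (cents / 100.0)
  end
end

# spec/lib/price_formatter_spec.rb
# No `require "spec_helper"` here -- only the file under test is loaded,
# so the Rails environment never boots for this spec.
require_relative "../../lib/price_formatter"

describe PriceFormatter do
  it "formats cents as dollars" do
    expect(PriceFormatter.new.format(1250)).to eq("$12.50")
  end
end
```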
I'm just curious where people tend to use FactoryGirl.build_stubbed and where they use double when writing RSpec specs. That is, are there best practices like "only use FactoryGirl methods in their corresponding model specs?"
Is it a code smell when you find yourself using FactoryGirl.create(:foo) in spec/models/bar_spec.rb?
Is it less of a code smell if you're using FactoryGirl.build_stubbed(:foo) in spec/models/bar_spec.rb?
Is it a code smell if you're using FactoryGirl.create(:foo) in foos_controller_spec.rb?
Is it less of a code smell if you're using FactoryGirl.build_stubbed(:foo) in foos_controller_spec.rb?
Is it a code smell if you're using FactoryGirl.build_stubbed(:foo) in spec/decorators/foo_decorator_spec.rb?
Sorry for so many questions! I just would love to know how other people draw the lines in unit test isolation and object oriented design best practices.
Thanks!
I believe that there are best practices that guide us to think about when to use mocks (in this case "doubles") versus integrating against real dependencies (in this case "Factories"). There is a really good book on testing (caveat: it uses Java examples) that describes the purpose of test-driven development, and I think it's very helpful in this discussion on testing in Rails applications. It describes the intention of testing as follows:
... our intention in test-driven development is to use mock objects to bring out relationships between objects.
Freeman, Steve; Pryce, Nat (2009-10-12). Growing Object-Oriented Software, Guided by Tests (Kindle Locations 3878-3879). Pearson Education (USA). Kindle Edition.
If we think about this emphasis on using test-driven development not only to prevent us from introducing regressions, but also to help us think about how our code is structured in terms of its interface and its relationships to other objects, we will naturally use mocks in many cases. I'll describe how this applies to your specific questions below.
First, in terms of whether we use mock objects or real dependencies in model tests - if we're testing class Foo and its dependency on Bar, we may want to substitute a mock for Bar. In this way we will see clearly the level of coupling to Bar as we'll have to mock the methods that will be called on it. If we find that our mock of Bar is complex, it's a sign that perhaps we should refactor Foo and Bar so that they are less coupled to one another.
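To make the Foo/Bar example concrete, an isolated spec along these lines (Foo's methods and Bar's interface are invented for illustration) surfaces the coupling, because every message Foo sends to Bar has to be stubbed:

```ruby
# Hypothetical Foo, which depends on a Bar collaborator.
class Foo
  def initialize(bar)
    @bar = bar
  end

  def summary
    "#{@bar.count} #{@bar.label}"
  end
end

describe Foo do
  it "summarizes using its Bar collaborator" do
    # Every method Foo calls on Bar shows up in this stub list; if the
    # list grows long, Foo and Bar are probably too tightly coupled.
    bar = double("Bar", count: 3, label: "widgets")

    expect(Foo.new(bar).summary).to eq("3 widgets")
  end
end
```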
In the sense that both Factory.create and Factory.build_stubbed have the same effect of keeping you from making dependencies on related classes explicit, I think they're both about as smelly, with Factory.create being the slower of the two options.
In my tests I tend not to worry too much about mocking external dependencies in controllers. I know that this is slower to run than fully mocking, and you don't get the benefit of making the controller's coupling to the model explicit, but it's quicker to write the test, and I'm generally not as worried about making clear the relationship between controllers and the persisted records that they manage. As long as you follow the "skinny controllers" pattern, there shouldn't be too much logic to worry about here anyway. If we need to specify a level of "test smell" here, I would say that it's a bit less smelly than model tests that depend on other factories.
I would tend to worry least about decorators that depend on factories of the classes they decorate. This is because, by definition, a decorator should maintain the same interface as the class it decorates. Decoration is most often implemented with some form of delegation or inheritance, whether by using method_missing to forward to the decoratee or by explicitly subclassing it. Because of this, you're breaking other rules of good object-oriented programming, such as the Liskov Substitution Principle, if the decorator deviates too much from the interface of the thing it decorates. As long as your decoration isn't implemented poorly, the coupling to the decorated class is already present, so having the decorator's test depend on a persisted or stubbed factory of the thing it decorates doesn't make things much worse. You can go crazy with factories in decorator tests and it doesn't matter, IMO.
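As a sketch of that delegation style of decorator (the decorated class and method are invented for illustration):

```ruby
# A decorator that forwards anything it doesn't define to the decoratee,
# so it keeps the same interface as the object it wraps.
class FooDecorator
  def initialize(foo)
    @foo = foo
  end

  def display_name
    @foo.name.upcase
  end

  def method_missing(name, *args, &block)
    @foo.send(name, *args, &block)
  end

  def respond_to_missing?(name, include_private = false)
    @foo.respond_to?(name, include_private) || super
  end
end

# In a decorator spec, building the decoratee with a factory is fine:
#   decorator = FooDecorator.new(FactoryGirl.build_stubbed(:foo))
#   decorator.display_name
```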
I think it's important to note that even if you prefer mocks in most cases, you should still have some integration tests that use real dependencies. You'll just find that these cover specific high-value cases, while the isolated unit tests provide broader coverage of the functionality of your classes.
At any rate I break all of the above rules sometimes and they are just some guidelines that I use in writing tests. I'm looking forward to hearing how others are using Factories (build_stubbed and really persisted) versus mock objects (doubles) in their tests.
I started with MVC quite recently because I heard that the major advantage of MVC is that it makes the application unit testable. After writing my first unit tests, I saw that it is not always simple to test controllers that have a lot of logic inside (sending confirmation emails, using Session, context, and other ASP.NET statics). It takes me more time to write the unit test than the functionality itself, and I am not convinced that this is useful.
I am tempted to move the business logic into a "Service" layer that eliminates all the ASP.NET statics and can be easily tested, and then to use Selenium integration tests to cover the whole functionality.
Have you run into situations where testing an action is very complex (especially mocking the input and setting up the environment)?
Have you found a good approach to keeping business logic in controllers, or have you found it better to use services so that the controller code just relays calls to them?
In my opinion, testing a controller is closer to integration testing than to unit testing. What do you think about this?
Do you think that unit testing controllers has any advantage over integration tests?
I am tempted to move the business logic into a "Service" layer that eliminates all the ASP.NET statics and can be easily tested, and then to use Selenium integration tests to cover the whole functionality.
This, pretty much. If your controllers are complex, they need to be refactored; they shouldn't have any business logic at all. You can use a mocking framework to mock the service layer and test your controllers easily that way.
In my opinion, testing a controller is closer to integration testing than to unit testing. What do you think about this?
I disagree with this. You are testing your controller to make sure it returns the appropriate response based on the input you give it. Supply an id that doesn't exist? It redirects to another page or returns a NotFound view. ModelState is invalid? It returns the same view again, and so on.
Have you run into situations where testing an action is very complex (especially mocking the input and setting up the environment)?
This happens when your controllers have a lot of dependencies and are tightly wired to them. Unless it is existing code where making changes would create more trouble than it's worth, you should loosely couple the dependencies through interfaces or abstract classes; that makes unit testing much easier. You should even use wrappers around Session, Cache, and similar objects.
As @Dismissile suggests, first refactor your controllers, and then unit testing will be easy.
Have you found a good approach to keeping business logic in controllers, or have you found it better to use services so that the controller code just relays calls to them?
Controllers are not the place for business logic. All the business logic should be in the model classes. The whole responsibility of the controller is to talk to the model and return a view, JSON, or whatever back to the client. If you have complex business logic in the controllers, you should move it into model classes.
Simply put, you should think "Dumb Views, Thin Controllers, Fat Models"!
In my opinion, testing a controller is closer to integration testing than to unit testing. What do you think about this?
Integration testing is totally different from unit testing. In integration testing you have to set up the application and run the test cases against it; you are testing the behavior of the whole application in every test scenario, not a single unit. Unit testing is all about testing the functionality of methods in a class, and testing a class or method in unit testing should be independent of other classes or methods.
But the thing is, unit testing should be kept in mind when designing an application; otherwise unit testing becomes as difficult as integration testing, and of course then it's not unit testing at all.
Do you think that unit testing controllers has any advantage over integration tests?
Finding and fixing errors at the unit level is much easier than at the system level, so the answer is yes.
I think that in your case the application's controllers do more than they should. So if you are serious about unit testing, you have to refactor and loosely couple the dependencies wherever needed; otherwise there is not much gain in writing unit tests at all.
Things started off simple with my fake repositories that contained hard-coded lists of entities.
As I have progressed, my shared fake repositories have become bloated. I am continually adding new properties and new entities to these lists. This is making them extremely difficult to maintain, and it is also hard to see what each test is doing. I believe this is an anti-pattern called "General Fixture".
In researching ASP.NET MVC unit tests, I have seen two methods for preparing repository fixtures that are passed on to the controllers.
Create hard-coded fake repositories that are shared among all tests
Mock parts of the repositories within each test
I'm tempted to explore option #2 above, but I've read that it's not a good idea to mock repositories, and it seems quite daunting in scenarios where I'm testing a controller that operates on collections (i.e. with paging/sorting/filtering capabilities).
My question to the community...
What methods for preparing repository fixtures work well beyond rudimentary examples?
I don't think you should be choosing only one of the two options. There are cases when using a fake repository is better, and there are cases when mocking is better. I think you should assess what you need on a case-by-case basis. For example, if you are writing a test for a UsersService that needs to call IUserRepository.DoesUserExist(), which returns a boolean, then you wouldn't use a fake repository; it's easier just to mock the call to return true or false.
Moq is awesome.
For a similar reason, on a new project I'm looking into using an ORM (NHibernate in my case). That way I can point it at an in-memory SQLite instance (rather than SQL Server), and it should be far easier to set up and maintain (I hope). Then I will only need to mock the repository when I have a requirement to test particular scenarios (such as time-outs, etc.).
If you are using your unit tests for TDD, download Rhino Mocks and use option #2.
For the most part, we go with test-specific repository mocks. I've never seen advice not to do this myself, and I find that it works great. Our repository methods, and therefore our mocks, mostly return single models or lists of models (not data contexts), so it is easy to create data that is specific to each test and isolated to each query. This means we can mock whatever data we like without affecting other tests or other queries in the same test. It is very easy to see why the data was created and what it is testing.
I have been on teams that decided to create shared mock data from time to time as well. I think the decision was generally made because the routines generated dynamic queries, and the data required to mock all of the tests meant duplicating a good portion of the database. In retrospect, though, I probably would have suggested that only the resulting queries needed to be checked, not the contents returned from the database; then no data at all would have needed to be mocked, although it would have required some code changes. I only mention this to illustrate that if you can't seem to find a way to make option #2 work, maybe there is a way to refactor the code to make it more testable.