Controller tests bleeding to models?

I am writing some tests with RSpec (tests and not specs, the code was untested until now) and have stumbled upon an uncertainty...
I want to know whether a controller is calling the model's methods properly, and I'm torn between these possibilities:
test the controller while stubbing the model method (but then I won't know whether the model method actually exists or accepts the arguments it is given)
leave the model method unstubbed and risk having my controller tests bleed into model-test territory (and also make them slow because of DB access and costly methods)
write multiple controller tests, each leaving one model method unstubbed (still slow as hell, but at least it's verbose)
Is there a correct answer on this?

You could stub the model method if you want, but in general you shouldn't check in a controller test that a particular model method was called; you should check the controller's response content. Don't forget about the black-box metaphor.

I suggest you test your controllers without stubbing your models. Don't worry about the speed of the tests when they hit the database. I assume you want the database tested too, and having a correct program is more important than the speed of your tests, isn't it?
Consider the functional tests as another layer around your unit tests, not as something that is isolated from your models. Your unit tests (models) ensure that some model methods work as expected, and then your controller tests ensure that the controller is able to use these methods, and they work as the controller expects.
As iafonov said, do not focus on the model's methods in your controller tests. Assume that if your controller is able to give you the correct response, then your model apparently works as expected.
Of course, some people have a different point of view. I do not claim that my suggestion is the best; it just works for me, and I consider it right. A lot of people suggest that you should test your controllers in isolation from models, but how do you then ensure that there is no discrepancy between your stubs and your real implementation?

I'm pretty late to the party, but I agree strongly with solnic and disagree with Arsen7.
Think about it:
If you are using vanilla Active Record methods, e.g. MyModel.find_by_id(123), you can safely stub them because AR is already well tested; there is no need to hit the database for those.
If you are calling a custom method you defined on the model, e.g. MyModel.foo(param1, param2) then you should still mock/stub it because you should have a test for it in your MyModel spec.
The only downside to stubbing model methods is that if you change a method's interface, your controller will be ignorant of that change and the test will still pass. Typically either integration or manual tests will uncover the problem. If you are working on a large project, speed quickly becomes an issue, and avoiding the perf hit from interacting with the database is more than worth an occasional head scratch imho.
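As a rough sketch of what that stubbing can look like in practice (WidgetsController and Widget are made-up names, and it assumes RSpec's verify_partial_doubles setting is on so that stubbing a method that no longer exists fails loudly):

# Hypothetical controller spec; Widget / WidgetsController are placeholders.
require "rails_helper"

RSpec.describe WidgetsController, type: :controller do
  describe "GET #show" do
    it "asks the model for the record and responds successfully" do
      widget = instance_double(Widget)
      # Partial double on the class: with verify_partial_doubles enabled,
      # RSpec raises if Widget does not actually respond to find_by_id,
      # which softens the interface-drift problem mentioned above.
      allow(Widget).to receive(:find_by_id).and_return(widget)

      get :show, params: { id: 123 }

      expect(response).to have_http_status(:ok)
      expect(Widget).to have_received(:find_by_id).with("123")
    end
  end
end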

With good model/unit tests it's recommended to stub models in controller specs (and it is obviously recommended to have good model/unit specs heh). Full stack should be covered by requests/acceptance specs anyway. I like to treat controller specs as 'unit specs' for controllers. With skinny controllers stubbing model in specs should be easy and should not touch any implementation details.

Related

Rails and Testing, why testing controllers is not enough?

I was wondering: since testing a controller in Rails runs the associated views (even if they are not shown) and exercises many model concerns (saving, updating, ...), shouldn't controller tests be almost enough for any application that stays close to a classic CRUD architecture? Am I wrong?
Furthermore, views can be checked in the browser, since eyeballing a page can be quicker than describing everything in a test (and it lets you check the CSS too).
Thank you for your point of view!
PH
Testing only your controllers will tell you that, broadly, your app is working, at least in terms of not 500'ing or whatever. But can you be sure that it is doing exactly the right thing? If all you need to test is standard resourceful behaviour like "given params[:id], is the record with id <params[:id]> loaded?", then just testing the controller might be enough.
But, you will inevitably add more complicated behaviour into your models. In this situation, your controller may set some variables or something, without raising an error, by calling a model method. At this stage, it's much cleaner to test the model method directly, to make sure that given a particular set of conditions, it does the right thing.
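For example, a custom model method is much easier to pin down in a model spec than indirectly through the controller. A minimal sketch, where Article and its recently_published class method are invented purely for illustration:

# Hypothetical model spec; Article.recently_published is an invented example.
require "rails_helper"

RSpec.describe Article, type: :model do
  describe ".recently_published" do
    it "returns only published articles, newest first" do
      old   = Article.create!(title: "old",   published_at: 2.weeks.ago)
      fresh = Article.create!(title: "fresh", published_at: 1.day.ago)
      Article.create!(title: "draft", published_at: nil)

      expect(Article.recently_published).to eq([fresh, old])
    end
  end
end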

Structure of BDD tests

I'm digging into Capybara and RSpec, to move from TDD to BDD.
My generators make a whole lot of directories and spec tests,
with directory structure similar to this:
spec
  controllers
  models
  requests
  routing
  views
I think that most of this is TDD rather than BDD. If I read here:
"A great testing strategy is to extensively cover the data layer with
unit tests then skip all the way up to acceptance tests. This approach
gives great code coverage and builds a test suite that can flex with a
changing codebase."
Then I figure that things should be quite different, something along the lines of:
spec
  models
  acceptance
Basically I take out controllers, requests, views, and routing, and just implement tests as use-case scenarios in the acceptance directory with Capybara and RSpec.
This makes sense to me, though I'm not sure if this is the standard/common approach to this.
What is your approach?
Thanks,
Giulio
tl;dr
This is not a standard approach.
If you only test models and feature specs... then you miss out on the bits in the middle.
You can tell: "method X broke on the Widget model" or you can tell "there's something wrong while creating widgets" but you have no knowledge of anything else.
If something broke, was it the controller? the routing? some hand-over between the two?
It's good to have:
extremely thorough testing at the model-level (eg check every validation, every method, every option based on incoming arguments)
rough testing in the middle to make sure sub-systems work the way you expect (eg controllers set up the right variables and call the right templates/redirections given a certain set of circumstances)
overall feature testing as smoke-tests (eg that a user can go through the happy path and everything works the way they expect... that if they input bad stuff, that the app is throwing up the right error messages and redisplaying the forms for them to fix the problem)
Don't forget that models aren't the only classes in your app, and all classes need some kind of testing. Controllers are classes too. As are form and service objects, mailers, etc.
That said, it's common to consider that view tests are going overboard. I'm also not keen on request tests or routing tests myself (unless I have something complex that I want to work right, e.g. lots of optional params in a route that map to interesting search patterns).
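For reference, here's a rough sketch of the feature-level smoke test mentioned above, using a hypothetical sign-up flow (all paths, labels, and page copy are invented):

# Hypothetical feature spec; paths, labels, and copy are invented examples.
require "rails_helper"

RSpec.describe "Signing up", type: :feature do
  it "lets a visitor create an account via the happy path" do
    visit "/signup"

    fill_in "Email",    with: "jane@example.com"
    fill_in "Password", with: "s3cret-password"
    click_button "Create account"

    expect(page).to have_content("Welcome, jane@example.com")
  end

  it "redisplays the form with errors on bad input" do
    visit "/signup"

    click_button "Create account"

    expect(page).to have_content("Email can't be blank")
  end
end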

Spec for testing a controller method that calls model methods

I'm refactoring my controllers by moving the logic into the model. I was finding it hard to test my controller methods when they had so much logic (I also wasn't able to reuse that logic across controllers). Now I'd like to understand how to write specs for these controllers. I'm following this testing guide.
Here's an example:
def dashboard
  @sorted_deals = Deal.deals_for_user(current_user)
end
This calls a class method that has some logic to find the relevant deals and sort them appropriately. It feels like unnecessary duplication to test deals_for_user again (I already test it in my model spec). How do I test this method without needless duplication? Is this a case for mocks or stubs?
There is some debate on the usefulness of controller tests in Rails. Personally I unit test the shit out of my models, and the only real testing I do around controllers is integration testing with a headless browser such as Selenium with Capybara. If you know your method works and is tested, and the view and its associated logic work and are tested through integration tests, then controller testing is almost a waste of time. There are other opinions on this and I am in no way a guru, but I thought I would put in my $0.02.
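If you do decide to spec the action directly, a minimal sketch of stubbing the class method (so the model logic stays covered only by the model spec) might look like this; DealsController, the dashboard route, and current_user are assumed from the question:

# Sketch only: DealsController#dashboard and current_user are assumed above.
require "rails_helper"

RSpec.describe DealsController, type: :controller do
  describe "GET #dashboard" do
    it "asks the model for the current user's deals" do
      user  = instance_double(User)
      deals = [instance_double(Deal)]
      allow(controller).to receive(:current_user).and_return(user)
      # Stub the class method; its real behaviour stays covered in the Deal spec.
      allow(Deal).to receive(:deals_for_user).with(user).and_return(deals)

      get :dashboard

      expect(response).to have_http_status(:ok)
      expect(Deal).to have_received(:deals_for_user).with(user)
    end
  end
end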

Should I really test controllers?

I'm trying to get the best code coverage / development time trade-off.
Currently I use RSpec + Shoulda to test my models and RSpec + Capybara to write my acceptance tests.
I tried writing a controller test for a simple CRUD controller, but it kinda took too long and I got a confusing test in the end (my bad, probably).
What's the best practice for controller testing with RSpec?
Here is a gist on my test and my controller(one test does not pass yet):
https://gist.github.com/991687
https://gist.github.com/991685
Maybe not.
Sure you can write tests for your controller. It might help write better controllers. But if the logic in your controllers is simple, as it should be, then your controller tests are not where the battle is won.
Personally I prefer well-tested models and a thorough set of integration (acceptance) tests over controller tests any time.
That said, if you have trouble writing tests for controllers, then by all means do test them. At least until you get the hang of it. Then decide whether you want to continue or not. Same goes for every kind of test: try it until you understand it, decide afterwards.
The way I view this is that acceptance tests (i.e. Cucumber / Capybara), test the interactions that a user would normally perform on the application. This usually includes things like can a user create a specific resource with valid data and then do they see errors if they enter invalid data. A controller test is more for things that a user shouldn't be able to normally do or extreme edge cases that would be too (cu)cumbersome to test with Cucumber.
Usually when people write controller tests, they are effectively testing the same thing. The only reason to test a controller's method in a controller test is for edge cases.
Edge cases such as: if a user enters an invalid ID on a show page, they should be shown a 404 page. This is a very simple kind of thing to test with a controller test, and I would recommend doing that. You want to make sure that when they hit the action they receive a 404 response, boom, simple.
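A minimal sketch of that kind of edge-case spec (WidgetsController is a made-up example, and it assumes the action turns ActiveRecord::RecordNotFound into a 404 response):

# Hypothetical: assumes the controller rescues RecordNotFound into a 404.
require "rails_helper"

RSpec.describe WidgetsController, type: :controller do
  describe "GET #show with an unknown id" do
    it "responds with 404" do
      get :show, params: { id: "does-not-exist" }

      expect(response).to have_http_status(:not_found)
    end
  end
end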
Making sure that your new action responds successfully and doesn't syntax error? Please. That's what your Cucumber features would tell you. If the action suddenly develops a Case of the Whoops, your feature will break and then you will fix that.
Another way of thinking about it is do you want to test a specific action responds in a certain way (i.e. controller tests), or do you care more about that a user can go to that new action and actually go through the whole motions of creating that resource (i.e. acceptance tests)?
Writing controller tests gives your application permission to lie to you. Some reasons:
Controller tests are not executed in the environment your application actually runs in, i.e. they do not sit at the end of a Rack middleware stack, so things like users are not available when using Devise (as a single, simple example). As Rails moves to a more Rack-based setup, more Rack middlewares are used, and your test environment deviates increasingly from the 'unit' behaviour.
You're not testing the behaviour of your application, you're testing the implementation. By mocking and stubbing your way through, you're re-implementing the implementation in spec form. One easy way to tell if you're doing this: if you don't change the expected behaviour of the URL's response, but do change the implementation of the controller (maybe even map the route to a different controller), do your tests break? If they do, you're testing implementation, not behaviour. You're also setting yourself up to be lied to. When you stub and mock, there's no assurance that the mocks or stubs you've set up do what you think they do, or even that the methods they're pretending to be still exist after refactoring.
Calling controller methods is impossible via your application's 'public' API. The only way to get to a controller is via the stack and the route. If you can't break it from a request via a URL, is it really broken?
I use my tests as an assurance that my application is not going to break when I deploy it. Controller tests add nothing to my confidence that my application is indeed functional, and actually their presence decreases my confidence.
One other example: when testing the behaviour of your application, do you care that a particular template file was rendered, or that a certain exception was raised, or is the behaviour of your application simply to return some stuff to the client with a particular status code?
Testing controllers (or views) increases the burden of tests that you impose on yourself, and means that the cost of refactoring is higher than it needs to be because of the potential to break tests.
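In that spirit, a request spec exercises the full stack (routing, middleware, controller, view) and asserts on the observable response rather than the implementation. A minimal sketch with an invented Widget model, path, and page content:

# Sketch of a request spec; the model, path, and content are invented.
require "rails_helper"

RSpec.describe "Widgets", type: :request do
  it "returns the widget page through the full stack" do
    widget = Widget.create!(name: "Sprocket")

    get "/widgets/#{widget.id}"

    # Assert on observable behaviour (status and body), not on which
    # template was rendered or which model method was called.
    expect(response).to have_http_status(:ok)
    expect(response.body).to include("Sprocket")
  end
end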
Should you test? Yes.
There are gems that make testing controllers faster:
http://blog.carbonfive.com/2010/12/10/speedy-test-iterations-for-rails-3-with-spork-and-guard/
Definitely test the controller. A few painfully learned rules of thumb:
mock out model objects
stub model object methods that your controller action uses
sacrifice lots of chickens.
I like to have a test on every controller method at least just to eliminate stupid syntax errors that may cause the page to blow up.
A lot of people seem to be moving towards the approach of using Cucumber for integration testing in place of writing controller and routing tests.

Mocks and Stubs

I really don't understand what Mocks and Stubs are. I want to know when, why and how we use Mocks in our test cases. I know that there are good frameworks out there for Mocks and Stubs in Ruby on Rails, but without knowing the purpose, I'm reluctant to use them in my app.
Can you please clarify about Mocks and Stubs? Please help.
My very simplified answer is:
mocks are objects that have a similar interface to something else
stubs are fake methods that return a specific, predefined answer
With both we are trying to achieve the same thing: we want to test a specific unit (model/view/controller/module) in isolation. E.g. when we are testing the controller, we do not want to test our model, so we use a mock. We want to make sure the correct methods are called, e.g. find. So on our mock, we have a stub that will return something predefined, without actually going to the database.
So we test for expectations: the methods that we expect to be called (on other units), without actually calling them. The behaviour of that specific method should be covered by its own test.
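A minimal sketch of that idea in RSpec (Post and PostsController are placeholder names): the double stands in for the model, and the stubbed find returns something predefined without touching the database.

# Illustration only: Post / PostsController are placeholder names.
require "rails_helper"

RSpec.describe PostsController, type: :controller do
  it "fetches the post through the model without hitting the database" do
    fake_post = instance_double(Post, title: "Hello")
    # Stub: a fake find with a predefined return value.
    allow(Post).to receive(:find).with("1").and_return(fake_post)

    get :show, params: { id: 1 }

    # Expectation: the method we expect the controller to call on the model.
    expect(Post).to have_received(:find).with("1")
    expect(response).to have_http_status(:ok)
  end
end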
According to Fowler's article "Mocks Aren't Stubs", mocks are not stubs: stubs are fake methods independent of outside calls, while mocks are fake objects with pre-programmed reactions to the calls they receive.
Mocking is more specific and object-related:
if certain parameters are passed, then the object returns certain results. The behavior of an object is imitated or "mocked".
Stubbing is more general and method-related:
a stubbed method usually returns the same result for all parameters. The behavior of a method is frozen, canned or "stubbed".
Mocks are used in interaction-based testing to verify behavior. With a mock, you can assert that the method under test called another method. For example, I might want to make sure that a controller object calls a repository to get some data.
Stubs are used in state-based testing to set up a certain application state. Unlike mocks, you don't worry whether the call was made or not. For example, if you were testing some repository code, you might want to set up a stub method to make sure that the repository correctly handles the case when the database connection is closed.
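A rough side-by-side illustration of the two styles, with a tiny Repository class invented just for the example:

# Invented Repository class, defined inline purely to contrast the two styles.
class Repository
  def initialize(connection)
    @connection = connection
  end

  def fetch(key)
    @connection.open? ? @connection.query(key) : []
  end
end

RSpec.describe Repository do
  it "is verified with a mock in interaction-based testing" do
    connection = double("Connection", open?: true)
    # Mock-style: assert that the collaborator receives the call we expect.
    expect(connection).to receive(:query).with(:reports).and_return([:a_report])

    expect(Repository.new(connection).fetch(:reports)).to eq([:a_report])
  end

  it "is driven by a stub in state-based testing" do
    # Stub-style: set up the 'connection is closed' state, then check the result;
    # we don't care whether open? was actually called.
    connection = double("Connection", open?: false)

    expect(Repository.new(connection).fetch(:reports)).to eq([])
  end
end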

Resources