I'm refactoring my controllers by moving logic into the models. I was finding it hard to test my controller methods when they contained so much logic (and I couldn't reuse that logic across controllers). Now I'd like to understand how to write specs for these controllers. I'm following this testing guide.
Here's an example:
def dashboard
  # Delegate the querying and sorting logic to the model
  @sorted_deals = Deal.deals_for_user(current_user)
end
This calls a class method containing the logic that finds the relevant deals and sorts them appropriately. It feels like unnecessary duplication to test deals_for_user again (I already test it in my model spec). How do I test this method without needless duplication? Is this a case for mocks or stubs?
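For context, the kind of stubbed spec I have in mind would look roughly like this (only a sketch: it assumes dashboard lives on a DealsController, that current_user can be stubbed directly, and that old-style RSpec controller specs with assigns are available):

describe DealsController do
  describe "GET #dashboard" do
    it "assigns the deals returned by Deal.deals_for_user" do
      user  = double("User")
      deals = [double("Deal")]

      allow(controller).to receive(:current_user).and_return(user)
      # Stub the class method that is already covered by the model spec.
      allow(Deal).to receive(:deals_for_user).with(user).and_return(deals)

      get :dashboard

      expect(assigns(:sorted_deals)).to eq(deals)
    end
  end
end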
There is some debate on the usefulness of controller tests in Rails. Personally I unit test the shit out of my models, and the only real tests I write for controllers are integration tests using a headless browser, such as Selenium with Capybara. If you know your method works and is tested, and the view/associated logic works and is tested through integration tests, controller testing is almost a waste of time. There are other opinions on this and I am in no way a guru, but I thought I would put in my $0.02.
While RSpec automatically creates specs for any helpers created by the Rails generators, I was wondering whether other Rails developers find it important/useful to spec their helpers in real-world projects, or whether they often don't bother, since the helpers are usually tested by proxy through the components that use them?
Personally I do test helper methods, because I like to test them in isolation. If a subsequent feature spec fails, I know I probably made a mistake in my test setup, because I have already ensured that the helper method works.
It is also easier to test all possible scenarios in isolation. If you want to cover every possibility as part of a full-stack test, you need more test setup and you sacrifice performance.
Ideally you want to write tests for everything, but in the real world, with time constraints, it is not uncommon to skip simple helper method tests because you implicitly exercise them while testing the actual feature. In the same way, some developers may skip private method tests.
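For what it's worth, an isolated helper spec is usually only a few lines. A sketch (ProductsHelper and formatted_price are made-up names; the helper object is provided by rspec-rails helper specs):

# spec/helpers/products_helper_spec.rb
describe ProductsHelper do
  describe "#formatted_price" do
    it "formats cents as dollars" do
      expect(helper.formatted_price(1000)).to eq("$10.00")
    end

    it "handles zero" do
      expect(helper.formatted_price(0)).to eq("$0.00")
    end
  end
end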
I'm digging into Capybara and RSpec to move from TDD to BDD.
My generators create a whole lot of directories and spec files,
with a directory structure similar to this:
spec/
  controllers/
  models/
  requests/
  routing/
  views/
I think that most of this is TDD rather than BDD. If I read here:
"A great testing strategy is to extensively cover the data layer with
unit tests then skip all the way up to acceptance tests. This approach
gives great code coverage and builds a test suite that can flex with a
changing codebase."
Then I figure that things should be quite different,
something along the lines of:
spec/
  models/
  acceptance/
Basically I take out controllers, requests, views, and routing, and just implement tests as use case scenarios in the acceptance directory with Capybara and RSpec.
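For concreteness, a spec in that acceptance directory would be a plain Capybara scenario, something like this (a rough sketch; the Widget resource, paths and flash copy are made up, and it assumes capybara/rspec is loaded so the feature/scenario DSL is available):

# spec/acceptance/create_widget_spec.rb
require "spec_helper"

feature "Creating a widget" do
  scenario "with valid data" do
    visit new_widget_path
    fill_in "Name", with: "Sprocket"
    click_button "Create Widget"

    expect(page).to have_content("Widget was successfully created")
  end

  scenario "with invalid data" do
    visit new_widget_path
    click_button "Create Widget"

    expect(page).to have_content("Name can't be blank")
  end
end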
This makes sense to me, though I'm not sure whether this is the standard/common approach.
What is your approach?
Thanks,
Giulio
tl;dr
This is not a standard approach.
If you only test models and feature specs... then you miss out on the bits in the middle.
You can tell "method X broke on the Widget model", or "something went wrong while creating widgets", but you have no visibility into anything in between.
If something broke, was it the controller? the routing? some hand-over between the two?
It's good to have:
extremely thorough testing at the model level (e.g. check every validation, every method, every option based on incoming arguments)
rough testing in the middle to make sure sub-systems work the way you expect (e.g. controllers set up the right variables and call the right templates/redirections given a certain set of circumstances; see the sketch just after this list)
overall feature testing as smoke tests (e.g. a user can go through the happy path and everything works the way they expect, and if they input bad data the app throws up the right error messages and redisplays the forms so they can fix the problem)
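Expanding on that middle layer, a rough controller spec might look like this (a sketch only, using a hypothetical Widget model that validates presence of name, older Rails params syntax, and old-style controller spec helpers such as assigns and render_template):

describe WidgetsController do
  describe "POST #create" do
    it "redirects to the widget on success" do
      post :create, widget: { name: "Sprocket" }

      expect(response).to redirect_to(widget_path(assigns(:widget)))
    end

    it "re-renders the form on failure" do
      post :create, widget: { name: "" }

      expect(response).to render_template(:new)
    end
  end
end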
Don't forget that models aren't the only classes in your app, and all classes need some kind of testing. Controllers are classes too. As are form and service objects, mailers, etc.
That said, it's common to consider view tests as going overboard. I'm also not keen on request tests or routing tests myself (unless I have something complex that I want to get right, e.g. lots of optional params in a route that map to interesting search patterns).
I am writing some tests with RSpec (tests and not specs, the code was untested until now) and have stumbled upon an uncertainty...
I want to know whether a controller is calling the model's methods properly and I am divided between the possibilities:
test the controller while stubbing the model method (I won't know whether the model method actually exists or accepts the arguments given)
leave the model method unstubbed and risk having my controller tests bleed into model test territory (and also make them slow because of DB access and costly methods)
write multiple controller tests, each leaving a single model method unstubbed (still slow as hell, but at least it's verbose)
Is there a correct answer on this?
You could stub the model method if you want, but in general you shouldn't check in a controller test that a particular model method was called; you should check the controller's response content. Don't forget the black box metaphor.
I suggest you test your controllers without stubbing your models. Do not worry about the speed of the tests when they hit the database. I assume you want the database covered as well, and having a correct program is more important than the speed of your tests, isn't it?
Consider the functional tests as another layer around your unit tests, not as something that is isolated from your models. Your unit tests (models) ensure that some model methods work as expected, and then your controller tests ensure that the controller is able to use these methods, and they work as the controller expects.
As iafonov said, do not focus on the model's methods in your controller tests. Assume that if your controller is able to give you the correct response, then your model apparently works as expected.
Of course, some people have a different point of view. I do not claim that my suggestion is the best; it just works for me, and I consider it right. A lot of people suggest that you should test your controller in isolation from models, but how do you then ensure that there is no discrepancy between your stubs and your real implementation?
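To illustrate (with a hypothetical Article model and an index action that loads published articles), an unstubbed spec just creates real records and asserts on what the controller produces:

describe ArticlesController do
  describe "GET #index" do
    it "lists only published articles" do
      published = Article.create!(title: "Hello", published: true)
      Article.create!(title: "Draft", published: false)

      get :index

      # No stubs: this exercises the real model query against the database.
      expect(assigns(:articles)).to eq([published])
      expect(response.status).to eq(200)
    end
  end
end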
I'm pretty late to the party. But agree strongly with solnic and disagree with Arsen7.
Think about it:
If you are using vanilla ActiveRecord methods, e.g. MyModel.find_by_id(123), you can safely stub them because AR is already well tested; there's no need to hit the database for those.
If you are calling a custom method you defined on the model, e.g. MyModel.foo(param1, param2), then you should still mock/stub it, because you should already have a test for it in your MyModel spec.
The only downside to stubbing model methods is that sometimes if you change the interface for a method your controller will be ignorant of that change and the test will still pass. Typically either integration or manual tests will uncover the problem. If you are working on a large project speed quickly becomes an issue and avoiding the perf hit from interacting with the database is more than worth an occasional head scratch imho.
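One way to soften that downside, if you're on a reasonably recent rspec-mocks, is to turn on verifying doubles: stubs on real classes then have to match methods that actually exist, with compatible arity, so an interface change surfaces in the controller spec too. A sketch, reusing the Deal.deals_for_user example from the first question above:

# In spec_helper.rb / rails_helper.rb
RSpec.configure do |config|
  config.mock_with :rspec do |mocks|
    # Stubs on real objects must correspond to real methods.
    mocks.verify_partial_doubles = true
  end
end

# In the controller spec
it "loads deals through the model" do
  deals = [instance_double(Deal)]

  # Raises if Deal.deals_for_user has been renamed or its arity changed.
  allow(Deal).to receive(:deals_for_user).and_return(deals)

  get :dashboard

  expect(assigns(:sorted_deals)).to eq(deals)
end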
With good model/unit tests it's recommended to stub models in controller specs (and it is obviously recommended to have good model/unit specs heh). Full stack should be covered by requests/acceptance specs anyway. I like to treat controller specs as 'unit specs' for controllers. With skinny controllers stubbing model in specs should be easy and should not touch any implementation details.
When scaffolding a controller, Rails also creates tests for that model. Do those tests have the ability to check for runtime errors on the whole page, including rendering the .erb?
If so, can the tests catch common typos in the .erb, for example checkbox instead of check_box?
Silly typos take a stupid amount of time to figure out, because the code looks right.
It would be good if there were a plugin that used a service to check whether something is a common typo or gotcha.
In general I test the controller in the controller tests and the views in the view tests.
In more detail, I will test that, given the right input to the controller, it produces the right output. I usually mock out the model(s) involved and concentrate on the work that matters inside the controller. In my view tests I simply validate that the things on the page look the way I want them to. I also use Jasmine to test JavaScript when I have more complex interactions in that body of code.
I put a lot of stock in this separation of tests.
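To make that concrete for the typo question above: a bare-bones view spec actually renders the ERB, so a call to an undefined checkbox helper (instead of check_box) blows the example up immediately. A sketch, with made-up template and model names:

# spec/views/widgets/new.html.erb_spec.rb
describe "widgets/new" do
  it "renders the new widget form" do
    assign(:widget, Widget.new)

    # Fails with an undefined-method error if the template uses
    # `checkbox` instead of `check_box`.
    render

    expect(rendered).to include("form")
  end
end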
I use RSpec for my model, view, controller, routing and request tests. I write failing tests and then implement the feature/method to make those tests go green, and I also run rake spec before checking in. This gives me good coverage on the kind of things you're concerned about.
There is also a gem called autotest that will run the relevant tests each time you save a file (or at some other granularity). If your test is green it will run the entire suite. I don't use it because I don't like how aggressively it does this, but I have friends who swear by it.
I'm trying to get the best code coverage / development time trade-off.
Currently I use RSpec + Shoulda to test my models and RSpec + Capybara to write my acceptance tests.
I tried writing a controller test for a simple CRUD controller, but it took too long and I ended up with a confusing test (my bad, probably).
What's the best practice for controller testing with RSpec?
Here is a gist of my test and my controller (one test does not pass yet):
https://gist.github.com/991687
https://gist.github.com/991685
Maybe not.
Sure you can write tests for your controller. It might help write better controllers. But if the logic in your controllers is simple, as it should be, then your controller tests are not where the battle is won.
Personally I prefer well-tested models and a thorough set of integration (acceptance) tests over controller tests any time.
That said, if you have trouble writing tests for controllers, then by all means do test them. At least until you get the hang of it. Then decide whether you want to continue or not. Same goes for every kind of test: try it until you understand it, decide afterwards.
The way I view this is that acceptance tests (i.e. Cucumber / Capybara) test the interactions that a user would normally perform on the application. This usually includes things like: can a user create a specific resource with valid data, and do they see errors if they enter invalid data. A controller test is more for things that a user shouldn't normally be able to do, or extreme edge cases that would be too (cu)cumbersome to test with Cucumber.
Usually when people write controller tests, they are effectively testing the same thing. The only reason to test a controller's method in a controller test is for edge cases.
Edge cases such as: if a user passes an invalid ID to a show page they should be shown a 404 page. This is a very simple kind of thing to test with a controller test, and I would recommend doing that. You want to make sure that when they hit the action they receive a 404 response, boom, simple.
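Something like this, for instance (a sketch only; it assumes a WidgetsController whose show action rescues ActiveRecord::RecordNotFound and renders a 404, plus older Rails params syntax):

describe WidgetsController do
  describe "GET #show" do
    it "responds with 404 for an unknown id" do
      get :show, id: "does-not-exist"

      expect(response.status).to eq(404)
    end
  end
end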
Making sure that your new action responds successfully and doesn't syntax error? Please. That's what your Cucumber features would tell you. If the action suddenly develops a Case of the Whoops, your feature will break and then you will fix that.
Another way of thinking about it is do you want to test a specific action responds in a certain way (i.e. controller tests), or do you care more about that a user can go to that new action and actually go through the whole motions of creating that resource (i.e. acceptance tests)?
Writing controller tests gives your application permission to lie to you. Some reasons:
controller tests do not execute in the environment your application actually runs in, i.e. they don't sit at the end of a Rack middleware stack, so things like the current user are not available when using Devise (as a single, simple example). As Rails moves further toward a Rack-based setup, more Rack middlewares are used, and your test environment deviates increasingly from the one the controller actually runs in.
You're not testing the behaviour of your application, you're testing the implementation. By mocking and stubbing your way through, you're re-implementing the implementation in spec form. One easy way to tell if you're doing this: if you don't change the expected behaviour of a URL's response, but do change the implementation of the controller (maybe even map the route to a different controller), do your tests break? If they do, you're testing implementation, not behaviour. You're also setting yourself up to be lied to. When you stub and mock, there are no assurances that the mocks or stubs you've set up do what you think they do, or even that the methods they stand in for still exist after refactoring.
Calling controller methods is impossible via your application's 'public' API. The only way to get to a controller is via the stack and the route. If you can't break it from a request via a URL, is it really broken?
I use my tests as an assurance that my application is not going to break when I deploy it. Controller tests add nothing to my confidence that my application is actually functional; if anything, their presence decreases it.
One other example: when testing the behaviour of your application, do you care that a particular template file was rendered, or that a certain exception was raised, or is the behaviour of your application rather to return some content to the client with a particular status code?
Testing controllers (or views) increases the burden of tests that you impose on yourself, and means that the cost of refactoring is higher than it needs to be because of the potential to break tests.
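If you take that view, the natural replacement is a request spec that goes through the full stack and only asserts on externally visible behaviour (a sketch; the Widget resource, paths and copy are hypothetical, and it uses older request spec params syntax):

# spec/requests/widgets_spec.rb
describe "Widgets" do
  it "creates a widget through the public interface" do
    post widgets_path, widget: { name: "Sprocket" }

    # Behaviour, not implementation: a redirect and the new record,
    # regardless of which controller or template did the work.
    expect(response).to redirect_to(widget_path(Widget.last))
    follow_redirect!
    expect(response.body).to include("Sprocket")
  end
end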
Should you test? Yes.
There are gems that make testing controllers faster
http://blog.carbonfive.com/2010/12/10/speedy-test-iterations-for-rails-3-with-spork-and-guard/
Definitely test the controller. A few painfully learned rules of thumb:
mock out model objects
stub model object methods that your controller action uses
sacrifice lots of chickens.
I like to have at least one test on every controller method, just to eliminate stupid syntax errors that may cause the page to blow up.
A lot of people seem to be moving towards the approach of using Cucumber for integration testing in place of writing controller and routing tests.