What's worth testing in Ruby on Rails? [closed]

I'm doing a contract job in Ruby on Rails, and it is turning into a productivity disaster, largely because of writing tests. This is my first time using TDD in practice, and it hasn't gone well: I am spending so much time writing tests that I have hardly any results to show. I'm thinking that perhaps I'm trying to test too much by writing tests for every feature in each model and controller.
If I can't aim for 100% test coverage, what are some criteria that I could use to determine "is this feature worth testing"? For example, would integration tests trump unit tests?

If you're just getting started with testing in the Ruby or Rails world, I'd make the following suggestions:
Start with RSpec. Automated acceptance/integration testing with a tool like Cucumber can be a large time sink for a single developer who has never used it before. Success with those tools is often contingent upon (a) quality UI specs that are very specific, (b) UI conventions which are easily testable with headless browser emulators, and (c) familiarity with the tools ahead of go time.
Test individual methods. Test that they return the values you expect. Test that when you feed them bad data, they respond in an appropriate manner. Test any edge cases as you become aware of them.
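For instance, a minimal RSpec sketch of this style of test (the Order model, its line_item_prices attribute, and #total_price are hypothetical names, not from the original answer):

    require 'spec_helper'

    # Hypothetical Order model exposing line_item_prices and #total_price.
    describe Order do
      it 'returns the sum of the line item prices' do
        order = Order.new(line_item_prices: [100, 250])
        expect(order.total_price).to eq(350)
      end

      it 'responds sensibly to bad data' do
        order = Order.new(line_item_prices: nil)
        expect(order.total_price).to eq(0)
      end
    end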
Be very careful that you are stubbing and mocking correctly in your tests. It's easy to write a test for 30 minutes only to discover that you're not really testing the thing you need to be testing.
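As a hedged illustration of that trap (Invoice and #overdue? are made-up names), here the stub replaces the very method under test, so the spec passes no matter what the real code does:

    describe Invoice do
      it 'looks like a test but proves nothing' do
        invoice = Invoice.new
        # BAD: the stub replaces the method we meant to test,
        # so the expectation exercises the stub, not the real code.
        allow(invoice).to receive(:overdue?).and_return(true)
        expect(invoice.overdue?).to be(true)
      end
    end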
Don't go overboard with micro-managing your TDD - some folks will tell you to test every tiny step in writing a method: first test that the model has a method called 'foo', then test whether it returns non-nil, then test that it returns a string, then test that the string contains a certain substring. While this approach can be helpful when you're implementing something complex, it can be a time sink as well. Just skip the first two steps. That being said, it's easy to go too far in the other direction, specifying a method's behavior with a complex test before you begin implementing it, then beginning the implementation only to find you've botched the test.
Don't write tests that just say 'this is how I wrote the feature, don't change it'. Tests should reflect the essential business logic of a method. If you are writing tests specifically so that they will fail if another developer changes some non-critical part of your implementation, you are wasting time and adding superfluous lines of code.
These are just a few observations I've made from having been in similar situations. Best of luck, testing can be a lot of fun! No, really. I mean it.

100% test coverage is a fantasy and a waste of time. Your tests should serve a purpose, typically to give you confidence that the code you wrote works. Not absolute confidence, but some amount of confidence. TDD should be a tool, not a restriction.
If it's not making your work come out better, why are you doing it? More importantly, if you fail to produce useful code and lose the contract, those tests weren't too useful after all, were they? It's a balance, and it sounds like you're on the wrong side of it.
If you're new to Rails, you can get a small dose of its opinionated creator's view on testing in this 37signals blog article on the topic. Small rules of thumb, but maybe something to push you in a new direction on the subject.
There are also good references on improving your use of RSpec, such as betterspecs.org, The RSpec Book, and Everyday Rails Testing with RSpec. Using RSpec poorly can result in a lot of headaches maintaining the specs.

My advice is to try and get your testing and your writing of code as tightly coupled as possible, combined with an Agile approach to the project.
This way you will constantly have new stuff to show the client, as testing will just be baked in. The biggest mistake I see with teams that are new to testing is continuing to treat testing as a separate activity. Most of all, I keep seeing developers say that a feature is done... but that it will need some refactoring and some better tests at "some point". "Some point" rarely comes. One thing is inescapable though: for at least several months it will be much slower in the short term, but with much better quality, and you'll avoid building the "big ball of mud" I've seen in so many larger institutions.

A few things:
Don't:
- Test the database
- Test ActiveRecord or whatever ORM you're using
Do:
For models:
- Test validations
- Test custom logic
For controllers:
- Test non-trivial routes
- Test redirects
- Test authentication
- Test instance variable assignment
For views:
- I haven't gotten around to testing views, but I've run into situations where I wish I had: testing fields in forms, for example.
More at Rails Guides. (A rough sketch of the model and controller items follows below.)
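A rough RSpec sketch of the model and controller items above (Post, PostsController, and sign_in_path are assumptions for illustration, not from the original answer):

    # Model: a validation and custom logic live here.
    describe Post do
      it 'is invalid without a title' do
        expect(Post.new(title: nil)).not_to be_valid
      end
    end

    # Controller: check the redirect rather than re-testing Rails itself.
    describe PostsController do
      it 'redirects guests who try to create a post' do
        post :create, params: { post: { title: 'Hi' } }
        expect(response).to redirect_to(sign_in_path)
      end
    end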

Related

Is testing relations in Rails considered to be a best practice? [closed]

Wondering whether testing relations in Rails is widely considered to be a best practice or not.
We have disagreement on the team on testing relations.
IMO it's redundant, i.e. having each has_many :people tested with should have_many(:people) looks repetitive, just copy-paste from the model into the test. I cannot imagine it stopping working or some other workflow breaking it; the only way to break it is to remove or change the line (but if someone just changes code randomly, that someone can also just remove the test). Moreover, it doesn't actually test the relation (e.g. whether Company returns people who have the appropriate company_id, or whether destroying a company actually destroys its people, which can fail in the case of a destroy validation or a foreign key); it only tests that the relation is specified.
And we don't test that a class responds to every single method; when we do test, we test what the method does.
Likewise, we don't test other static things, such as templates (simple HTML with no logic): we assume Rails will render the specified HTML, and it's impossible to break unless someone just changes it.
On the other hand, an argument can be made that relations are an extremely important part of the app, and that in an effort to reach 100% coverage they should be tested too.
Usually, in case of disagreement, we look to best practices, but I cannot find any mention of relation tests, either for or against.
Can you please help me find out whether this is common practice or not? (Links to best practices that mention this or something similar, or your own experience working with such tests, would help.)
Thank you in advance
You've asked an opinion-based question that is hard to answer with sources, so you might need to rethink your question. However, I'll give it a try.
I don't know about best practices, but in my opinion anything that can be "fat-fingered" should be tested, relationships/associations included. Now, you want to find out how to avoid testing Rails logic and instead just test that the relation was set up, but that's not tough to do.
Minitest suites I contribute to all have tests for the existence of expected relationships/associations.
Again, the idea is that if someone accidentally removes or adds a character or two to that line of code, there is a specific test to catch it. Not only that, but removing the line of code deliberately should also include removing a test deliberately.
Yes of course if someone wants to remove that line of code completely, and go remove a test, they should be able to do that. And at that point the assumption is that the entire task was deliberate. So that's not an argument to avoid testing in my opinion. The test is there to catch accidental mistakes, not deliberate actions.
Additionally, just because the test seems like copy/paste or repetition, that's also not a reason to avoid it in my opinion. The better the application code is, the more all tests will start to look repetitive or like copy/paste boilerplate. That's actually a good thing. It means the application code does just one small thing (and likely does it well). The more repetitive tests get, the easier they are to write, the more likely they are to be written, and the more you can refactor, simplify, and DRY them up as well.
In my opinion, this should be a best practice, and I've not had much push-back from any of the dozen or so other Rails developers I've worked with personally. And on a wider scale, the fact that shoulda-matchers has specific matchers for this means there are enough other developers out there wanting this capability.
Here is a minitest example of testing a relationship/association without testing Rails logic itself:
    # Assert only that Car declares the expected belongs_to associations;
    # the associations' behavior itself is covered by Rails' own tests.
    test 'contains a belongs_to relationship to some models' do
      expected = [:owner, :make].sort
      actual   = Car.reflect_on_all_associations(:belongs_to).map(&:name).sort
      assert_equal(expected, actual)
    end
To your point that it doesn't test the actual behavior of the code and only tests that the relationship was defined, whereas you'd expect a method written on a model to be tested for its actual behavior, not just that it was defined...
That's because you as an end-developer wrote that model method, so its behavior should be tested. But you do not want to test logic existing in the Rails core, as the Rails team has already written the tests for that.
Said another way, it makes perfect sense not to test the functionality of the association, but only test that it is defined, because the functionality is tested already by the Rails test suite.
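For comparison, a sketch of the same declare-only check in RSpec via the shoulda-matchers one-liner (using the Company/people example from the question, and assuming shoulda-matchers is installed and configured):

    describe Company do
      # Asserts only that the association is declared on the model;
      # its runtime behavior is already covered by Rails' own test suite.
      it { should have_many(:people) }
    end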
In our company we don't test Rails internal logic.
We don't check that Rails handles has_many, belongs_to, etc. correctly.
That's Rails-internal stuff you shouldn't have to bother with.
Normally you have more than enough other stuff to test.

Integration test best practices [closed]

I looked on Stack Overflow and could find one or two questions with a similar title to this one, but none of them answers what I'm asking. Sorry if this is duplicated.
In unit testing, there is a guideline that says "one assertion per test". Reading around Stack Overflow and the internet, it is commonly accepted that this rule can be relaxed a bit, but every unit test should still test one aspect of the code, or one behavior. This works well because when a test fails you can immediately see what failed, and once you fix it, the test will most likely not fail again at some other point in the future.
This works well for Rails unit tests, and I have been using it for functional testing as well without any problem. But when it comes to integration tests, it is somewhat implicit that you should have many assertions in your tests. Apart from that, they usually repeat tests that are already done once in functional and in unit tests.
So, what are considered good practices when writing integration tests in these two factors:
Length of the integration tests: how do you decide when an integration test should be split in two? By the number of requests? Or is larger always better?
Number of assertions in integration tests: should a test repeat the assertions already made in unit and functional tests about the current state of the system, or should it have only five or so asserts at the end to check that the correct output was generated?
Hopefully someone will provide a more authoritative answer, but my understanding is that an integration test should be built around a specific feature. For example, in an online store, you might write one integration test to make sure that it's possible to add items to your cart, and another integration test to make sure it's possible to check out.
How long should an integration test be?
As long as it takes to cover a feature, and no more. Some features are small, some are large, and their size is a matter of taste. When they're too big, they can easily be decomposed into several logical sub-features. When they're too small, their integration tests will look like view or controller tests.
How many assertions should they have?
As few as possible, while still being useful. This is true of all tests, but it goes doubly for integration tests because they're so slow. This means testing only the things that are most important, and trying not to test things that are implied by other data. In the case of the checkout feature, I might assert that the order was created for the right person and has the right total, but leave the exact items untested (since my architecture might generate the total from the items). I wouldn't make any assertions before that that I didn't have to, since traversing the application—filling this field, clicking that button, waiting for this modal to open—covers all the integration behavior I need tested, and anything else could be covered by view tests if they need to be tested at all.
All together, in general this means that whereas unit tests tend to be only a couple lines long and preceded by a larger setup block, Rails integration tests tend to be a dozen lines long or more (most of which are interaction), and lack a setup block entirely.
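To make that concrete, a hedged sketch of a feature-focused Rails integration test (the store routes, fixtures, and totals are all invented for illustration):

    require 'test_helper'

    class CheckoutTest < ActionDispatch::IntegrationTest
      test 'checking out creates an order with the right total' do
        # Drive the feature end to end: add to cart, then check out.
        post cart_items_url, params: { product_id: products(:widget).id }
        follow_redirect!
        post orders_url, params: { order: { address: '1 Main St' } }
        follow_redirect!

        # Assert only the essentials; the items are implied by the total.
        assert_response :success
        assert_equal 250, Order.last.total
      end
    end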
Length of the integration tests: I agree that length here doesn't matter that much. It's more about the feature you're testing and how many steps it takes to test it. For example, let's say you're testing a five-step wizard which creates a project. I would put all five steps in one test and check that the relevant data appeared on screen. However, I would split the test if the wizard allowed for different scenarios that need to be covered.
Number of assertions in integration tests: don't test things that are already tested in other tests, but make sure user expectations are met. So test what the user expects to see on the screen, not back-end-specific code. Sometimes you might still need to check that the right data is in the database, for example when it's not supposed to appear on the screen.

How to shift development of an existing MVC3 app to a TDD approach?

I have a fairly large MVC3 application, of which I have developed a small first phase without writing any unit tests, especially tests targeted at detecting things like regressions caused by refactoring. I know it's a bit irresponsible to say this, but testing hasn't really been necessary so far, with very simple CRUD operations. Going forward, however, I would like to move toward a TDD approach.
I have basically completed phase 1, where I have written actions and views where members can register as authors and create course modules. Now I have more complex phases to implement where consumers of the courses and their trainees must register and complete courses, with academic progress tracking, author feedback and financial implications. I feel it would be unwise to proceed without a solid unit testing strategy, and based on past experience I feel TDD would be quite suitable to my future development efforts here.
Are there any known procedures for 'converting' a development effort to TDD, and for introducing unit tests to already written code? I don't need kindergarten level step by step stuff, but general strategic guidance.
BTW, I have included the web-development and MVC tags on this question because I believe these fields of development can have a significant influence on the unit testing requirements of project artefacts. If you disagree and wish to remove any of them, please be so kind as to leave a comment saying why.
I don't know of any existing procedures, but I can highlight what I usually do.
My approach for an existing system would be to attempt writing tests first to reproduce defects and then modify the code to fix it. I say attempt, because not everything is reproducible in a cost effective manner. For example, trying to write a test to reproduce an issue related to CSS3 transitions on a very specific version of IE may be cool, but not a good use of your time. I usually give myself a deadline to write such tests. The only exception may be features that are highly valued or difficult to manually test (like an API).
For any new features you add, first write the test (as if the class under test were an API), verify that the test fails, and then implement the feature to satisfy the test. Repeat. When you're done with the feature, run it through something like PEX. It will often highlight things you never thought of. Be sensible about which issues to fix.
For existing code, I'll use code coverage to help me find features I do not have tests for. I comment out the code, write the test (which fails), uncomment the code, verify that the test passes, and repeat. If needed, I'll refactor the code to simplify testing. PEX can also help.
Pay close attention to pain points, as it highlights areas that should be refactored. For example, if you have a controller that uses ObjectContext/IDbCommand/IDbConnection directly for data access, you may find that you require a database to be configured etc just to test business conditions. That is my hint that I need an interface to a data access layer so I can mock it and simulate those business conditions in my controller. The same goes for registry access and so forth.
But be sensible about what you write tests for. The value of TDD diminishes at some point, and it may actually cost more to write those tests than it would to give the feature to someone in India to test manually.

BDD with Cucumber and rspec - when is this redundant?

A Rails/tool specific version of: How deep are your unit tests?
Right now, I currently write:
Cucumber features (integration tests) - these test against the HTML/JS that is returned by our app, but sometimes also test other things, like calls to third-party services.
RSpec controller tests (functional tests), originally only if the controllers have any meaningful logic, but now more and more.
RSpec model tests (unit tests)
Sometimes this is entirely necessary: it is necessary to test behavior in the model that is not entirely obvious or visible to the end user. When models are complex, they should definitely be tested. But other times, the tests seem redundant to me. For instance, do you test method foo if it is only called by bar, and bar is tested? What if bar is a simple helper method on a model that is used by, and easily testable in, a Cucumber feature? Do you test the method in RSpec as well as Cucumber? I find myself struggling with this, as writing more tests takes time, and maintaining multiple "versions" of what is effectively the same behavior makes maintaining the test suite more time-intensive, which in turn makes changes more expensive.
In short, do you believe there is a time when writing only Cucumber features is enough? Or should you always test at every level? If you think there is a grey area, what is your threshold for "this needs a functional/unit test"? In practical terms, what do you do currently, and why (or why not) do you think it's sufficient?
EDIT: Here's an example of what might be "test overkill." Admittedly, I was able to write this pretty quickly, but it was completely hypothetical.
Good question, one I've grappled with recently while working on a Rails app, also using Cucumber/RSpec. I try to test as much as possible at every level, however, I've also found that as the codebase grows, I sometimes feel I'm repeating myself needlessly.
Using "Outside-in" testing, my process usually goes something like: Cucumber Scenario -> Controller Spec -> Model Spec. More and more I find myself skipping over the controller specs as the cucumber scenarios cover much of their functionality. I usually go back and add the controller specs, but it can feel like a bit of a chore.
One step I take regularly is to run rcov on my cucumber features with rake cucumber:rcov and look for notable gaps in coverage. These are areas of the code I make sure to focus on so they have decent coverage, be it unit or integration tests.
I believe models/libs should be unit tested extensively, right off the bat, as it is the core business logic. It needs to work in isolation, outside of the normal web request/response process. For example, if I'm interacting with my app through the Rails console, I'm working directly with the business logic and I want the reassurance that methods I call on my models/classes are well tested.
At the end of the day, every app is different and I think it's down to the developer(s) to determine how much test coverage should be devoted to different parts of the codebase and find the right balance so that your test suite doesn't bog you down as your app grows.
Here's an interesting article I dug up from my bookmarks that is worth reading:
http://webmozarts.com/2010/03/15/why-not-to-want-100-code-coverage/
Rails has a well-tested codebase, so I'd avoid re-testing stuff that is already covered by those tests.
For example, unless it is custom code, it is pointless to test the results of validations at unit and functional levels. I'd test them at the integration level though. Cucumber features act as specifications for your project, so it is good to specify that you need a validation for x and y, even if the implementation is a single line declaration in the model.
You usually don't want to have both Cucumber stories and RSpec controller specs/integration tests. Pick one (generally Cucumber is the better choice, except for certain special cases). Then use RSpec for your models, and that's all you need.
I test complex model/lib methods with RSpec, then the main business logic of the web app with Cucumber, so I'm sure that the main features will work 100%; then, if I have more time and resources, I test everything else.
It's easier to write simple specs for simple methods. It's much easier than writing cukes.
If you keep your methods simple, and keep your specs simple by testing only the logic inside each method, you will find joy in unit testing.
If anything is redundant, it's the Cucumber tests. If you have well-tested models and libs, your software should work.
The purpose of Cucumber is not to run integration tests. Cucumber, and BDD in general, works as a communication platform, a way to improve the "talk" inside a big team of developers and non-developers who are building big and complex software. BDD is very useful for communicating a model and its domain at the same level to everybody on the team, even to people who don't know anything about computers.
If that is not your scenario, don't use Cucumber, because you don't need it. Use RSpec and Capybara to write your JS and integration tests.

Pluses and minuses of using Factories in a Rails test suite?

I'm currently looking at a hefty Rails test suite. It's nothing I can get into specifics about, but the run time for the entire suite (unit/functional/some integration) can run upward of 5 minutes.
We're completely reliant on fixtures, and we're not mocking and stubbing as much as we should be.
Our next few sprints are going to be completely focused on the test suite: improving coverage, writing better tests, and most importantly writing more efficient tests.
So aside from more mocking and stubbing within our tests, we're considering replacing our fixtures, most likely with Factory Girl. I see a lot of happy folks in similar situations, but I haven't been able to find a good resource on the minuses of moving to factories. I have seen some slower benchmark numbers from various resources, but I cannot find a definitive "this is why factories are good, and this is why you might not want to use them".
Can anyone educate me on why I should or shouldn't be using factories?
Thanks!
Oleg's answer is great, but let me offer the perspective of someone who is using both.
Fixtures have sort of been the whipping boy of the Rails community for a while. Everyone understands the drawbacks of fixtures, but no one is really championing their strengths. In my experience, factories by themselves can easily become just as difficult to maintain as fixtures (it really depends on the schema, but I digress). The real strength of factories is in selective replacement of fixture-based pain. Let's talk about a couple specifics.
The first issue is performance. If you can test most of your app without hitting the database, then you will see a significant speed-up, but for most applications I don't think it's wise to avoid the database entirely. At some point you want to test the whole stack. Every time you mock or stub, you are making an assumption about an interface that may contain subtle bugs. So, assuming that you need to hit the database in some significant percentage of tests, transactional fixtures (you are using transactional fixtures, right?) could well be much, much faster than instantiating a whole environment for every test.
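For reference, in classic Rails/Test::Unit setups transactional fixtures are a one-line setting (Rails 5 later renamed it use_transactional_tests); a sketch:

    # test/test_helper.rb: wrap each test in a DB transaction that is
    # rolled back afterwards, so fixture data is loaded only once.
    class ActiveSupport::TestCase
      self.use_transactional_fixtures = true
      fixtures :all
    end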
I'd say, with the size of your test suite that you really need to look towards Continuous Integration to scale your development to the next level. No matter how much you speed them up, it's still a long time for developers to wait. Maybe look at autotest as well to help at the individual level. But ultimately CI is going to allow you to maintain testing discipline without sacrificing developer agility.
The place where fixtures really shine is in functional/integration testing. The way I look at it is that the fixtures should set up a healthy base state for the app to be tested. Most unit tests don't really need this. You can get very good unit coverage using factories. However when it comes to functional testing, any given page may be hitting dozens of models. I don't want to set up all that stuff in each test. As I construct ever more complex scenarios, I'm getting closer and closer to recreating a global data state which is exactly what fixtures were designed to do in the first place.
One controversial belief I hold is that all else being equal, I prefer one functional test to 20 unit tests (using Rails parlance). Why? Because the functional test proves that the end result that is sent to the user is correct. The unit tests are great for getting at nuances of functionality, but at the end of the day, you could still have a bug along an interface that breaks your entire site. Functional tests are what give me the confidence hitting deploy without actually loading up the page in my browser. I know that I could stub everything out and test both interfaces and get the same coverage, but if I can test the whole stack in one simple test at the expense of a little CPU, I'd much rather do that.
So what are my best practices for fixtures?
Set up a handful for every model to cover the broadest categories of data
When adding a major new feature that cuts across many models and controllers, add some new fixtures to represent the major states
Avoid editing old fixtures except for adding/removing fields
Use factories for smaller or more localized variations
Use factories for testing pagination or other mass creation that is only needed for a few tests
Also, let me recommend Jay Fields' blog for really good pragmatic testing advice. The thing I like most about Jay's blog is that he always acknowledges that testing is very project-specific, and what works for one project does not necessarily work for another. He's short on dogma and long on pragmatism.
There could be some issues with setting up all the dependencies between entities for a good test suite. Anyway, it's still much easier than maintaining a lot of fixtures.
Fixtures:
hard to maintain relationships (especially many-to-many);
test suite runtime is usually slower due to more DB hits;
tests are very sensitive to changes in schema.
Factories:
you stub everything you aren't testing in the current unit test;
you prepare the entities you are testing with factories. This is where factories show their real advantage: it's easy to set up new test cases, as you don't need to maintain a ton of YAML files for that;
you concentrate on testing. If tests require changing scenario, you don't shift your mindset. As long as stubs are reasonable and factories are easily customized, you should be fine.
So, factories seem a good way to go. The only possible drawbacks I see are:
the time you spend migrating from fixtures;
keeping a sane set of scenarios can require some effort.
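To make the contrast concrete, a minimal Factory Girl sketch (the factory names and attributes are hypothetical; the library lives on today as factory_bot):

    # spec/factories.rb
    FactoryGirl.define do
      factory :company do
        name 'Acme'
      end

      factory :person do
        name 'Alice'
        company        # association resolved via the matching factory
      end
    end

    # In a spec: create exactly the data one test needs, overriding per case.
    person = FactoryGirl.create(:person, name: 'Bob')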
