Integration test best practices [closed]

I looked on Stack Overflow and could find one or two questions with a title similar to this one, but none of them answers what I'm asking. Sorry if this is a duplicate.
In unit testing, there is a guideline that says "one assertion per test". Reading around Stack Overflow and the internet, it is commonly accepted that this rule can be relaxed a bit, but every unit test should still test one aspect of the code, or one behavior. This works well because when a test fails you can immediately see what failed, and once you fix it the test is unlikely to fail again for a different reason later.
This works well for Rails unit tests, and I have been using it for functional tests as well without any problem. But when it comes to integration tests, it is somewhat implicit that you should have many assertions per test. Apart from that, integration tests usually repeat checks that have already been made in functional and unit tests.
So, what are considered good practices when writing integration tests, with respect to these two factors:
Length of the integration tests: How do you decide when an integration test should be split in two? By the number of requests? Or is larger always better?
Number of assertions in integration tests: Should they repeat the assertions already made in unit and functional tests about the current state of the system every time, or should they have only five or so assertions at the end to check that the correct output was generated?

Hopefully someone will provide a more authoritative answer, but my understanding is that an integration test should be built around a specific feature. For example, in an online store, you might write one integration test to make sure that it's possible to add items to your cart, and another integration test to make sure it's possible to check out.
How long should an integration test be?
As long as it takes to cover a feature, and no more. Some features are small, some are large, and their size is a matter of taste. When they're too big, they can easily be decomposed into several logical sub-features. When they're too small, their integration tests will look like view or controller tests.
How many assertions should they have?
As few as possible, while still being useful. This is true of all tests, but it goes doubly for integration tests because they're so slow. This means testing only the things that are most important, and trying not to test things that are implied by other data. In the case of the checkout feature, I might assert that the order was created for the right person and has the right total, but leave the exact items untested (since my architecture might generate the total from the items). I wouldn't make any assertions along the way that I didn't have to, since traversing the application (filling this field, clicking that button, waiting for this modal to open) covers all the integration behavior I need tested, and anything else could be covered by view tests if it needs to be tested at all.
All together, in general this means that whereas unit tests tend to be only a couple lines long and preceded by a larger setup block, Rails integration tests tend to be a dozen lines long or more (most of which are interaction), and lack a setup block entirely.
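To make that shape concrete, here is a minimal sketch of a checkout integration test using Rails' built-in ActionDispatch::IntegrationTest. The routes, fixtures, and helpers (sign_in_as, cart_items_path, orders_path) are assumptions for illustration, not code from the question.

require "test_helper"

class CheckoutFlowTest < ActionDispatch::IntegrationTest
  test "a signed-in user can check out their cart" do
    user = users(:alice)                        # hypothetical fixture
    sign_in_as(user)                            # hypothetical auth helper

    # Mostly interaction: walk the feature the way a user would.
    post cart_items_path, params: { product_id: products(:book).id }
    follow_redirect!
    post orders_path, params: { payment_method: "card" }
    follow_redirect!
    assert_response :success

    # Few assertions: only what matters most for the feature.
    order = Order.last
    assert_equal user.id, order.user_id
    assert order.total.positive?
  end
end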

Length of the integration tests: I agree that length doesn't matter that much here. It's more about the feature you're testing and how many steps it takes to test it. For example, say you're testing a five-step wizard that creates a project. I would put all five steps in one test and check that the relevant data appeared on screen. However, I would split the test if the wizard allowed different scenarios that need to be covered.
Number of assertions in integration tests: Don't test things that are already tested in other tests, but make sure user expectations are met. So test what the user expects to see on the screen, not back-end specific code. Sometimes you might still need to check that the right data is in the database, for example when it's not supposed to appear on the screen.
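A hedged sketch of that advice as a Capybara feature spec; the wizard pages, field labels, and Project model are hypothetical:

require "rails_helper"

RSpec.feature "Project creation wizard", type: :feature do
  scenario "user completes all five steps" do
    visit new_project_path

    fill_in "Name", with: "Apollo"
    click_button "Next"                  # step 1 -> 2
    fill_in "Description", with: "Moon shot"
    click_button "Next"                  # step 2 -> 3
    # ... steps 3 and 4 follow the same pattern ...
    click_button "Create project"        # final step

    # Assert what the user expects to see on screen.
    expect(page).to have_content("Project Apollo was created")

    # Occasionally check data that never appears on screen.
    expect(Project.last.internal_reference).to be_present
  end
end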

Related

Is testing relations in Rails considered to be a best practice? [closed]

I'm wondering whether testing relations in Rails is widely considered a best practice or not.
We have a disagreement on the team about testing relations.
IMO it's redundant: having every has_many :people backed by a should have_many(:people) test looks repetitive, just copy-paste from the model into a test. I cannot imagine it ever breaking on its own or some other workflow breaking it; the only way to break it is to remove or change the line (and if someone is changing code randomly, that someone can just as easily remove the test as well). Moreover, it doesn't actually test the relation (e.g. that Company returns the people who have the appropriate company_id, or that destroying a company actually destroys its people, which can fail in practice because of a destroy validation or a foreign key); it only tests that the relation is declared.
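For illustration, behavioral tests of the kind described above, which a declaration-only check does not cover, might look like this; the Company and Person models and their attributes are hypothetical stand-ins:

require "test_helper"

class CompanyAssociationTest < ActiveSupport::TestCase
  test "company.people returns only people with its company_id" do
    company = Company.create!(name: "Acme")
    insider = Person.create!(name: "In", company: company)
    Person.create!(name: "Out", company: Company.create!(name: "Other"))

    assert_equal [insider], company.people.to_a
  end

  test "destroying a company destroys its people" do
    company = Company.create!(name: "Acme")
    Person.create!(name: "Employee", company: company)

    assert_difference "Person.count", -1 do
      company.destroy
    end
  end
end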
And we don't test that a class responds to every single method; when we do test a method, we test what it does.
Likewise, we don't test other static things, such as templates (simple HTML with no logic): we assume Rails will generate the specified HTML, and it cannot break unless someone deliberately changes it.
On the other hand, an argument can be made that relations are an extremely important part of the app, and that in the effort to reach 100% coverage they should also be tested.
Usually, in case of disagreement, we look to best practices, but I cannot find any mention of relation tests, either for or against.
Can you please help me find out whether this is common practice or not? (Links to best practices that discuss this or something similar, or your own experience working with such tests, are welcome.)
Thank you in advance.
You've asked an opinion-based question that is hard to answer with sources, so you might need to rethink your question. However, I'll give it a try.
I don't know about best practices, but in my opinion anything that can be "fat-fingered" should be tested, relationships/associations included. Now, you want to avoid testing Rails logic itself and instead just test that the relation was set up, but that's not tough to do.
Minitest suites I contribute to all have tests for the existence of expected relationships/associations.
Again, the idea is that if someone accidentally removes or adds a character or two to that line of code, there is a specific test to catch it. Not only that, but removing the line of code deliberately should also include removing a test deliberately.
Yes of course if someone wants to remove that line of code completely, and go remove a test, they should be able to do that. And at that point the assumption is that the entire task was deliberate. So that's not an argument to avoid testing in my opinion. The test is there to catch accidental mistakes, not deliberate actions.
Additionally, just because the test seems like copy/paste or repetition, that's also not a reason to avoid it in my opinion. The better the application code is, the more all tests will start to look repetitive or like copy/paste boilerplate. That's actually a good thing. It means the application code does just one small thing (and likely does it well). The more repetitive tests get, the easier they are to write, the more likely they are to be written, and the more you can refactor, simplify, and DRY them up as well.
In my opinion, this should be a best practice, and I've not had much push-back from any of the dozen or so other Rails developers I've worked with personally. On a wider scale, the fact that shoulda-matchers has specific matchers for this means there are enough other developers out there who want this capability.
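If the suite uses RSpec, a minimal shoulda-matchers sketch of the same declaration-only check might look like this (the Company model and its associations are assumptions):

require "rails_helper"

RSpec.describe Company, type: :model do
  it { is_expected.to have_many(:people).dependent(:destroy) }
  it { is_expected.to belong_to(:industry) }
end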
Here is a minitest example of testing a relationship/association without testing Rails logic itself:
test 'contains a belongs_to relationship to some models' do
  expected = [:owner, :make].sort
  actual = Car.reflect_on_all_associations(:belongs_to).map(&:name).sort
  assert_equal(expected, actual)
end
To your point that it doesn't test the actual behavior of the code and only tests that the relationship was defined, whereas you'd expect a method written on a model to be tested for its actual behavior, not just for being defined...
That's because you as an end-developer wrote that model method, so its behavior should be tested. But you do not want to test logic existing in the Rails core, as the Rails team has already written the tests for that.
Said another way, it makes perfect sense not to test the functionality of the association, but only test that it is defined, because the functionality is tested already by the Rails test suite.
In our company we don't test Rails' internal logic.
We don't check that Rails handles has_many, belongs_to, etc. correctly.
That's Rails-internal stuff you shouldn't have to bother about.
Normally you have more than enough other stuff to test.

What's worth testing in Ruby on Rails? [closed]

I'm doing a contract job in Ruby on Rails and it is turning into a disaster for productivity largely because of writing tests. This is my first time using TDD in practice and it hasn't gone well because I am spending so much time writing tests that I've hardly gotten any results to show. I'm thinking that perhaps I'm trying to test too much by writing tests for every feature in each model and controller.
If I can't aim for 100% test coverage, what are some criteria that I could use to determine "is this feature worth testing"? For example, would integration tests trump unit tests?
If you're just getting started with testing in the Ruby or Rails world, I'd make the following suggestions:
Start with RSpec. Automated acceptance/integration testing with a tool like Cucumber can be a large time sink for a single developer who has never used it before. Success with those tools is often contingent upon A) quality UI specs that are very, very specific, B) UI conventions that are easily testable with headless browser emulators, and C) familiarity with the tools ahead of go time.
Test individual methods. Test that they return the values you expect. Test that when you feed them bad data, they respond in an appropriate manner. Test any edge cases as you become aware of them.
Be very careful that you are stubbing and mocking correctly in your tests. It's easy to write a test for 30 minutes only to discover that you're not really testing the thing you need to be testing.
Don't go overboard with micro-managing your TDD - some folks will tell you to test every tiny step in writing a method: first test that the model has a method called 'foo', then test whether it returns non-nil, then test that it returns a string, then test that the string contains a certain substring. While this approach can be helpful when you're implementing something complex, it can be a time sink as well. Just skip the first two steps. That being said, it's easy to go too far in the other direction, specifying a method's behavior with a complex test before you begin implementing it, then beginning the implementation only to find you've botched the test.
Don't write tests that just say 'this is how i wrote the feature, don't change it'. Tests should reflect the essential business logic of a method. If you are writing tests specifically so that they will fail if another developer changes some non-critical part of your implementation, you are wasting time and adding superfluous lines of code.
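To make the earlier suggestions about testing individual methods and stubbing concrete, here is a minimal RSpec sketch; the Invoice model, its overdue? method, and the stubbed clock are hypothetical:

require "rails_helper"

RSpec.describe Invoice, type: :model do
  describe "#overdue?" do
    it "is true once the due date has passed" do
      invoice = Invoice.new(due_on: Date.new(2020, 1, 1))
      # Stub only the collaborator (the clock), not the method under test.
      allow(Date).to receive(:current).and_return(Date.new(2020, 2, 1))

      expect(invoice).to be_overdue
    end

    it "handles a missing due date without raising" do
      expect(Invoice.new(due_on: nil).overdue?).to be(false)
    end
  end
end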
These are just a few observations I've made from having been in similar situations. Best of luck, testing can be a lot of fun! No, really. I mean it.
100% test coverage is a fantasy and a waste of time. Your tests should serve a purpose, typically to give you confidence that the code you wrote works. Not absolute confidence, but some amount of confidence. TDD should be a tool, not a restriction.
If it's not making your work come out better, why are you doing it? More importantly, if you fail to produce useful code and lose the contract, those tests weren't too useful after all were they? It's a balance, and it sounds like you're on the wrong side.
If you're new to Rails, you can get a small dose of its opinionated creator's view on testing in this 37signals blog article on the topic. Small rules of thumb, but maybe something to push you in a new direction on the subject.
There are also good references on improving your use of RSpec like betterspecs.org, The RSpec Book and Everyday Rails Testing with RSpec. Using it poorly can result in a lot of headache maintaining the specs.
My advice is to try and get your testing and your writing of code as tightly coupled as possible, combined with an Agile approach to the project.
This way you will constantly have new stuff to show the client, as testing will just be baked in. The biggest mistake I see with teams that are new to testing is continuing to treat testing as a separate activity. Most of all, I keep seeing developers say that a feature is done... but that it will need some refactoring and some better tests at "some point". "Some point" rarely comes. One thing is inescapable though: for at least several months it will be much slower in the short term, but of much better quality, and you'll avoid building the "big ball of mud" I've seen in so many larger institutions.
A few things:
Don't
Test the database
Test ActiveRecord or whatever ORM you're using
Do
For models:
Test validations
Test custom logic
For controllers:
Test non-trivial routes
Test redirects
Test authentication
Test instance variable assignment
For views:
I haven't gotten around to testing views, but I've run into situations where I wish I had. For example, testing fields in forms.
More at Rails Guides
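A rough sketch of the "Do" items for models and controllers; the Article model and controller are assumptions, and on newer Rails the assigns helper requires the rails-controller-testing gem:

require "rails_helper"

# Models: test validations and custom logic.
RSpec.describe Article, type: :model do
  it "is invalid without a title" do
    expect(Article.new(title: nil)).not_to be_valid
  end
end

# Controllers: test redirects and instance variable assignment.
RSpec.describe ArticlesController, type: :controller do
  it "assigns the requested article" do
    article = Article.create!(title: "Hello")
    get :show, params: { id: article.id }
    expect(assigns(:article)).to eq(article)
  end

  it "redirects after a successful create" do
    post :create, params: { article: { title: "Hello" } }
    expect(response).to redirect_to(Article.last)
  end
end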

How to shift development of an existing MVC3 app to a TDD approach?

I have a fairly large MVC3 application, of which I have developed a small first phase without writing any unit tests, especially tests targeted at detecting things like regressions caused by refactoring. I know it's a bit irresponsible to say this, but tests haven't really been necessary so far, with very simple CRUD operations; still, I would like to move toward a TDD approach going forward.
I have basically completed phase 1, where I have written actions and views where members can register as authors and create course modules. Now I have more complex phases to implement where consumers of the courses and their trainees must register and complete courses, with academic progress tracking, author feedback and financial implications. I feel it would be unwise to proceed without a solid unit testing strategy, and based on past experience I feel TDD would be quite suitable to my future development efforts here.
Are there any known procedures for 'converting' a development effort to TDD, and for introducing unit tests to already written code? I don't need kindergarten level step by step stuff, but general strategic guidance.
BTW, I have included the web-development and MVC tags on this question as I believe these fields of development can have a significant influence on the unit testing requirements of project artefacts. If you disagree and wish to remove any of them, please be so kind as to leave a comment saying why.
I don't know of any existing procedures, but I can highlight what I usually do.
My approach for an existing system would be to attempt writing tests first to reproduce defects and then modify the code to fix it. I say attempt, because not everything is reproducible in a cost effective manner. For example, trying to write a test to reproduce an issue related to CSS3 transitions on a very specific version of IE may be cool, but not a good use of your time. I usually give myself a deadline to write such tests. The only exception may be features that are highly valued or difficult to manually test (like an API).
For any new features you add, first write the test (as if the class under test is an API), verify the test fails, and implement the feature to satisfy the test. Repeat. When you're done with the feature, run it through something like PEX. It will often highlight things you never thought of. Be sensible about which issues to fix.
For existing code, I'll use code coverage to help me find features I do not have tests for. I comment out the code, write the test (which fails), uncomment the code, verify test passes and repeat. If needed, I'll refactor the code to simplify testing. PEX can also help.
Pay close attention to pain points, as it highlights areas that should be refactored. For example, if you have a controller that uses ObjectContext/IDbCommand/IDbConnection directly for data access, you may find that you require a database to be configured etc just to test business conditions. That is my hint that I need an interface to a data access layer so I can mock it and simulate those business conditions in my controller. The same goes for registry access and so forth.
But be sensible about what you write tests for. The value of TDD diminishes at some point, and it may actually cost more to write those tests than to hand the feature to someone to test manually.

How to ensure I am testing everything, have all the features, and only those features, for old code?

We are running a pretty big website and have some critical legacy code there that we want to have well covered.
At the same time, we would like a report of the features we currently support and how they are covered. We also want to be sure we really cover every possible corner case. Some code paths are critical and will need many more tests even after achieving 100% coverage.
As we are already using RSpec, and RSpec has "feature" and "scenario" keywords, we tried to make such a list with RSpec rather than going with Cucumber, but I think this question applies to any testing tool.
We want something like this:
feature "each advertisement will be shown a specified % of impressions"
scenario "As ..."
This feature is minimal from the point of view of managers but huge in the code. It involves a backend tool, a periodic task, logic in the models, and views in both the backend and the front end.
We tried to divide it like this:
feature "each creative will be shown a specified % of impressions"
context "configuration"
context "display"
scenario "..."
context "models"
it "should ..."
context "frontend"
context "display"
scenario "..."
context "models"
it "should ..."
Configuration takes place in another tool, display would contain integration tests, and models would contain unit tests.
I repeat myself, but the idea is to ensure that the feature is really finished (including building the configuration tool) and 100% tested.
But looking at this file, it is neither an integration test nor a unit test, and it doesn't even belong to any particular project.
There should definitely be a better way of managing this.
Any experiences, resources, or ideas you can share to guide us?
The scenario you're describing is a huge reason why BDD is so popular. It forces you to write code in a way that's easy to test. Having said that, you're obviously not going to go back and rewrite the entire legacy application. There are a few things you should consider though:
As you go through each section of the application, you should ask yourself 'Will it be harder to refactor than to write tests for this?'. Sometimes refactoring before writing tests just cannot be avoided.
Testing isn't about 100% coverage, it's about 100% confidence. As you mentioned, you plan on writing more tests even when you have 100% coverage. This is because you're going for confidence. Once you're confident in a piece of code, move on. You can always come back to it at a later time.
From my experience, Cucumber is easier for tests that cover a large portion of the application. I think the reason for this is that writing out the tests in plain English makes you think of things you wouldn't have otherwise. It also allows you to focus on the behavior instead of the code and can make refactoring a less daunting task.
You don't really get much out of adding tests to existing code if you never touch that code again. Start by testing the code you want to make changes to (i.e. refactor) first.
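One common way to apply that advice is a coarse characterization spec that pins down today's behavior before the refactor; the route, page text, and Capybara usage here are assumed examples:

require "rails_helper"

RSpec.feature "Legacy ad rotation dashboard", type: :feature do
  scenario "still renders the impression percentages" do
    visit ads_dashboard_path             # hypothetical route

    # Pin down what the page does today, before touching the code.
    expect(page).to have_css(".advertisement")
    expect(page).to have_content("% of impressions")
  end
end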
I also recommend the book Rails Test Prescriptions, specifically one of the last chapters called "Testing a Legacy Application".

Best Practice: Order of application design [closed]

I can think of quite a few components that need to be created when authoring a web application. I know it should probably be done incrementally, but I'd like to see what order you usually tackle these tasks in. Lay out your usual order of events and give some justification.
A few possible components or sections I've thought of:
Stories (e.g. pivotaltracker.com)
Integration tests (Rspec, Cucumber, ...)
Functional tests
Unit Tests
Controllers
Views
Javascript functionality
...
The question is, do you do everything piecemeal (one story, one integration test, get it passing, move on to the next one, ...), or do you complete all of one component first and then move on to the next?
I'm a BDDer, so I tend to do outside-in. At a high level that means establishing the project vision first (you'd be amazed how few companies actually do this), identifying other stakeholders and their goals (legal, architecture, etc.) then breaking things down into feature sets, features and stories. A story is the smallest usable piece of code on which we can get feedback, and it may be associated with one or more scenarios. This is what Chris Matts calls "feature injection" - creating features because they are needed to support stakeholder goals and the project vision. I wrote an article about this a while back. I justify this because regardless of how good or well-tested your code is, it won't matter if it's the wrong code in the first place.
Once we have the story and scenarios, I tend to write the UI first, followed by the classes which support it. I wrote a blog post about a real-life example here - we were programming in Java so you might have to do things a bit differently with Rails, but the principles remain. I tend to start writing unit tests when there's actually behaviour to describe - that is, a class behaves differently depending on its context, on what has already happened before. Normally the first class will indeed be the controller, which I tend to populate with static data just to get the UI into shape. I'll write the first unit tests to help me get rid of that static data.
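A hedged Rails flavour of that "static data first, then test it away" step; the DashboardController, the Order model, and the assigns helper usage are hypothetical (on newer Rails, assigns needs the rails-controller-testing gem):

# app/controllers/dashboard_controller.rb -- first pass: static data,
# just enough to get the UI into shape for early stakeholder feedback.
class DashboardController < ApplicationController
  def index
    @recent_orders = []   # placeholder until real behaviour is driven out
  end
end

# spec/controllers/dashboard_controller_spec.rb -- the first unit test,
# written precisely to force that static placeholder out again.
require "rails_helper"

RSpec.describe DashboardController, type: :controller do
  it "exposes the most recent orders" do
    order = Order.create!(number: "B-7", total: 25.0)
    get :index
    expect(assigns(:recent_orders)).to eq([order])
  end
end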
Doing the UI first lets me get feedback from stakeholders early, since it's the UI that the users will be interacting with. I then start with the "happy path" - the thing which lets the users do the most valuable thing - followed by the exceptional cases, validation, etc.
Then I do my best to persuade my PM to let us release our code early, because it's only when the users actually get hold of it to play with that you find out what you really did wrong.
