Integration tests implementation - ASP.NET MVC

I have a three-tiered app:
web app (ASP.NET MVC, for simplicity here),
business services
data repositories
And I know there are four types of integration tests:
top down
bottom up
sandwich (combination of the top two)
big bang
I know I would write big-bang tests just like unit tests but without any mocking so I would employ a backend DB as well...
Questions
I don't know how to write the other types of integration tests.
How do I write non-big-bang integration tests?
Should integration tests mirror the unit tests, meaning the same number of tests but run without mocks? Or should these tests test something completely different?
Could anybody provide any information on how to do this, or on whether it's actually feasible?

I suggest doing these:
unit tests / must not hit any external resource
focused integration tests (I guess that'd be bottom up). You should have code that is very close to the external resource, whose sole responsibility is integrating with it. Don't try to unit test those classes; instead, write very focused tests that hit the real resource and don't have to deal with the rest of the logic in your system. Keep these integration classes as thin as possible (see the sketch after this list).
full system tests (I guess big bang). I mean with the UI and everything (or the API, if that's your endpoint). Make sure you cover as much as possible with the previous tests; these are more like simple checks that the underlying pieces are hooked up appropriately.
Depending on your system, you may or may not want to complement item 3 with integration tests at the top level of the code, but without involving the UI. Regardless of which option you take, make sure to get more comprehensive coverage through unit and focused integration tests, as testing varied behavior at the top level has a complexity that can get out of control very quickly.
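For illustration, here is a minimal sketch of such a focused integration test, written in RSpec terms against a hypothetical UserRepository class (the same shape works with NUnit or MSTest against a thin repository in .NET): it exercises only the thin data-access class against a real database, with none of the business logic involved.

    # spec/integration/user_repository_spec.rb
    # Hypothetical thin data-access class, tested against the real database.
    require "spec_helper"

    describe UserRepository do
      let(:repository) { UserRepository.new }

      it "persists and reloads a user" do
        repository.save(name: "Alice", email: "alice@example.com")

        user = repository.find_by_email("alice@example.com")
        expect(user.name).to eq("Alice")
      end
    end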
Should integration tests mirror the unit tests, meaning the same number of tests but run without mocks? Or should these tests test something completely different?
As I mentioned in 1 & 2, it's best when those test different things. This depends on the system, but I'd usually expect the number of unit tests to be a few times the number of integration tests. For full system tests, make sure you have enough that you can tell all the pieces were hooked up correctly, but not so many that it becomes too complex to test each scenario.

Related

How should I test a Rails application with RSpec to get complete test coverage?

When writing specs for a simple Rails app, is the following a correct approach for full test coverage?
Write feature specs for all user stories
Write controller specs to ensure that individual action responses are correct and all required variables are set
Write model specs to ensure all methods, validations, etc. are working as intended
Write mailer specs
Write routing specs
Is this enough, too much (e.g. can I skip some lower-level specs if I've written feature specs), or not enough? Why?
You don't need to write specs for every object in every layer either to get 100% test coverage or to test-drive (require you to implement) all of the important behavior in your application. Instead, as behavior-driven development (BDD) advises, write specs outside in, and write lower-level specs only as necessary.
The most important measure of test completeness is requirement coverage: it's helpful for each user story, and each detail of each story that requires new code, to be represented in at least one test. If you're following typical agile practices (mentioning user stories suggests that you are) your tests are probably the only place where you record your requirements, so you probably can't put a number on this kind of coverage. It's also helpful to have
line coverage (what most people mean when they say test coverage), meaning that every line of code is exercised by at least one test, and
integration coverage, meaning that every method call from one class to another is exercised by at least one test.
For each story,
Write only the feature specs that will test-drive all of the story's distinct happy paths.
Write additional feature specs to ensure integration coverage of architecturally interesting minor variations of happy paths and of sad paths. For example, I often write three feature specs for a story that involves a form: one where the user fills in every possible field and succeeds, another where the user fills in as little information as possible and still succeeds (ensuring that unspecified values and defaults work as intended), and one where the user makes a mistake, fails, corrects the mistake and succeeds (see the sketch below).
At this point you've already test-driven every layer (controllers, models, views, helpers, mailers, etc.) into existence, with only feature specs.
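As a minimal sketch of such feature specs, here are the three form scenarios described above, written with Capybara's feature/scenario DSL against a hypothetical sign-up form (the field names and paths are made up for illustration):

    # spec/features/sign_up_spec.rb -- hypothetical form, for illustration only
    require "spec_helper"

    feature "Signing up" do
      scenario "with every field filled in" do
        visit new_user_path
        fill_in "Name",  with: "Alice"
        fill_in "Email", with: "alice@example.com"
        fill_in "Bio",   with: "Optional details"
        click_button "Sign up"
        expect(page).to have_content("Welcome, Alice")
      end

      scenario "with only the required fields" do
        visit new_user_path
        fill_in "Name",  with: "Alice"
        fill_in "Email", with: "alice@example.com"
        click_button "Sign up"
        expect(page).to have_content("Welcome, Alice")
      end

      scenario "correcting a mistake" do
        visit new_user_path
        fill_in "Name", with: "Alice"
        click_button "Sign up"
        expect(page).to have_content("Email can't be blank")
        fill_in "Email", with: "alice@example.com"
        click_button "Sign up"
        expect(page).to have_content("Welcome, Alice")
      end
    end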
Write model and helper specs to drive out detailed requirements which live entirely in those classes. For example, once you've written a single sad-path feature spec that establishes that entering one particular invalid attribute sends the user back to edit their form submission and displays a message, you can handle other invalid attributes entirely by writing more examples in that model's spec that test that model attributes are validated, and let the architecture that you've already test-driven propagate the errors back to the user.
Note that although your feature specs already test the happy paths through model and helper methods, as soon as you start writing examples for a method for minor or error cases, you'll probably want to write the happy-path example or examples for that method too, so you can see the entire description of the method in one place, and so you can test the method fully just by running all its examples and not also have to run any feature specs.
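A sketch of the corresponding model spec, driving out the remaining validation details without going back through the UI (the attributes are hypothetical):

    # spec/models/user_spec.rb -- hypothetical attributes, for illustration only
    require "spec_helper"

    describe User do
      describe "validations" do
        it "is valid with a name and a well-formed email" do
          expect(User.new(name: "Alice", email: "alice@example.com")).to be_valid
        end

        it "is invalid without a name" do
          expect(User.new(email: "alice@example.com")).not_to be_valid
        end

        it "is invalid with a malformed email" do
          expect(User.new(name: "Alice", email: "not-an-email")).not_to be_valid
        end
      end
    end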
You might not need some kinds of specs at all:
Well-factored controller actions are short and have few or no conditionals, so you often won't need any controller specs at all. Write them only when needed, and stub out model, mailer, etc. behavior to keep them simple and fast (see the sketch at the end of this answer).
Similarly, views and mailers should have few or no conditionals (complex code should be refactored into helper and model methods), so you often won't need view or mailer specs at all.
Your feature specs will have test-driven all the routes you need, so you probably won't need routing specs. I've only ever gotten use out of routing specs when I had to do a major refactor of routes, as when upgrading from one major version of Rails to the next.
As long as you always write a test before you write new code, you'll always have 100% line coverage.
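For the rare case where a controller spec is warranted, a sketch of the stubbed style mentioned above (controller and model names hypothetical):

    # spec/controllers/users_controller_spec.rb -- only worth writing when the
    # action has logic of its own; the model layer is stubbed for speed.
    require "spec_helper"

    describe UsersController do
      describe "GET show" do
        it "assigns the requested user" do
          user = double("user")
          allow(User).to receive(:find).with("1").and_return(user)

          get :show, id: "1"

          expect(assigns(:user)).to eq(user)
        end
      end
    end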
That testing strategy sounds really comprehensive. If you had all of these tests in place you would have great test coverage. However, it would take you longer to deliver your project, and you would not be as agile as someone who is doing more limited testing. Testing has to suit the project. Don't over-test: over-testing can cost time and money. Don't under-test: under-testing can also cost time and money.
There are right ways to do unit testing. There are right ways to do integration testing. The glove has to fit. If your application is largely front-end facing, then perhaps it's best to start with integration tests. If you're writing a back-end application, or perhaps an API, then unit tests may be a better place to start. I think approaching the problem with one style of testing and then expanding to other styles is a better start than trying to test every layer of your application at once.
Why not start with simple unit tests? They are easy to write. Write these tests and then track how many bugs you ship. Are you letting in too many bugs? Are you having a lot of regression issues? Are there bugs getting through to production that your suite is not picking up? If the answer is yes, then maybe it's time to write some higher-level tests. Remember: the higher level a test is, the more development cost you will have to pay.
If you're not shipping bugs, then you have no reason to write any more tests. Remember the end goal here: we want to ship bug-free code. If we can write one test, and one test alone, that ensures we are doing this, then there is no reason to test any further.

Testing: how to focus on behavior instead of implementation without losing speed?

It seems, that there are two totally different approaches to testing, and I would like to cite both of them.
The thing is, those opinions were stated five years ago (in 2007), and I am interested in what has changed since then and which way I should go.
Brandon Keepers:
The theory is that tests are supposed to be agnostic of the
implementation. This leads to less brittle tests and actually tests
the outcome (or behavior).
With RSpec, I feel like the common approach of completely mocking your
models to test your controllers ends up forcing you to look too much
into the implementation of your controller.
This by itself is not too bad, but the problem is that it peers too
much into the controller to dictate how the model is used. Why does it
matter if my controller calls Thing.new? What if my controller decides
to take the Thing.create! and rescue route? What if my model has a
special initializer method, like Thing.build_with_foo? My spec for
behavior should not fail if I change the implementation.
This problem gets even worse when you have nested resources and are
creating multiple models per controller. Some of my setup methods end
up being 15 or more lines long and VERY fragile.
RSpec’s intention is to completely isolate your controller logic from
your models, which sounds good in theory, but almost runs against the
grain for an integrated stack like Rails. Especially if you practice
the skinny controller/fat model discipline, the amount of logic in the
controller becomes very small, and the setup becomes huge.
So what’s a BDD-wannabe to do? Taking a step back, the behavior that I
really want to test is not that my controller calls Thing.new, but
that given parameters X, it creates a new thing and redirects to it.
David Chelimsky:
It’s all about trade-offs.
The fact that AR chooses inheritance rather than delegation puts us in
a testing bind – we have to be coupled to the database OR we have to
be more intimate with the implementation. We accept this design choice
because we reap benefits in expressiveness and DRY-ness.
In grappling with the dilemma, I chose faster tests at the cost of
slightly more brittle. You’re choosing less brittle tests at the cost
of them running slightly slower. It’s a trade-off either way.
In practice, I run the tests hundreds, if not thousands, of times a
day (I use autotest and take very granular steps) and I change whether
I use “new” or “create” almost never. Also due to granular steps, new
models that appear are quite volatile at first. The valid_thing_attrs
approach minimizes the pain from this a bit, but it still means that
every new required field means that I have to change
valid_thing_attrs.
But if your approach is working for you in practice, then its good! In
fact, I’d strongly recommend that you publish a plugin with generators
that produce the examples the way you like them. I’m sure that a lot
of people would benefit from that.
Ryan Bates:
Out of curiosity, how often do you use mocks in your tests/specs?
Perhaps I'm doing something wrong, but I'm finding it severely
limiting. Since switching to rSpec over a month ago, I've been doing
what they recommend in the docs where the controller and view layers
do not hit the database at all and the models are completely mocked
out. This gives you a nice speed boost and makes some things easier,
but I'm finding the cons of doing this far outweigh the pros. Since
using mocks, my specs have turned into a maintenance nightmare. Specs
are meant to test the behavior, not the implementation. I don't care
if a method was called I just want to make sure the resulting output
is correct. Because mocking makes specs picky about the
implementation, it makes simple refactorings (that don't change the
behavior) impossible to do without having to constantly go back and
"fix" the specs. I'm very opinionated about what a spec/tests should
cover. A test should only break when the app breaks. This is one
reason why I hardly test the view layer because I find it too rigid.
It often leads to tests breaking without the app breaking when
changing little things in the view. I'm finding the same problem with
mocks. On top of all this, I just realized today that mocking/stubbing
a class method (sometimes) sticks around between specs. Specs should
be self contained and not influenced by other specs. This breaks that
rule and leads to tricky bugs. What have I learned from all this? Be
careful where you use mocking. Stubbing is not as bad, but still has
some of the same issues.
I took the past few hours and removed nearly all mocks from my specs.
I also merged the controller and view specs into one using
"integrate_views" in the controller spec. I am also loading all
fixtures for each controller spec so there's some test data to fill
the views. The end result? My specs are shorter, simpler, more
consistent, less rigid, and they test the entire stack together
(model, view, controller) so no bugs can slip through the cracks. I'm
not saying this is the "right" way for everyone. If your project
requires a very strict spec case then it may not be for you, but in my
case this is worlds better than what I had before using mocks. I still
think stubbing is a good solution in a few spots so I'm still doing
that.
I think all three opinions are still completely valid. Ryan and I were struggling with the maintainability of mocking, while David felt the maintenance tradeoff was worth it for the increase in speed.
But these tradeoffs are symptoms of a deeper problem, which David alluded to in 2007: ActiveRecord. The design of ActiveRecord encourages you to create god objects that do too much, know too much about the rest of the system, and have too much surface area. This leads to tests that have too much to test, know too much about the rest of the system, and are either too slow or brittle.
So what's the solution? Separate as much of your application from the framework as possible. Write lots of small classes that model your domain and don't inherit from anything. Each object should have limited surface area (no more than a few methods) and explicit dependencies passed in through the constructor.
With this approach, I've only been writing two types of tests: isolated unit tests, and full-stack system tests. In the isolation tests, I mock or stub everything that is not the object under test. These tests are insanely fast and often don't even require loading the whole Rails environment. The full stack tests exercise the whole system. They are painfully slow and give useless feedback when they fail. I write as few as necessary, but enough to give me confidence that all my well-tested objects integrate well.
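As a sketch of that style (class names hypothetical): a small plain-Ruby object with its dependency passed in through the constructor, and an isolated spec that stubs the collaborator.

    # A plain Ruby object: no ActiveRecord, one small job, explicit dependency.
    class PriceCalculator
      def initialize(tax_policy)
        @tax_policy = tax_policy
      end

      def total(amount)
        amount + @tax_policy.tax_for(amount)
      end
    end

    # Isolated spec: no Rails, no database, only the object under test.
    describe PriceCalculator do
      it "adds the tax returned by the policy" do
        tax_policy = double("tax_policy", tax_for: 5)
        calculator = PriceCalculator.new(tax_policy)

        expect(calculator.total(100)).to eq(105)
      end
    end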
Unfortunately, I can't point you to an example project that does this well (yet). I talk a little about it in my presentation Why Our Code Smells; also watch Corey Haines' presentation on Fast Rails Tests, and I highly recommend reading Growing Object-Oriented Software, Guided by Tests.
Thanks for compiling the quotes from 2007. It is fun to look back.
My current testing approach is covered in this RailsCasts episode which I have been quite happy with. In summary I have two levels of tests.
High level: I use request specs with RSpec, Capybara, and VCR. Tests can be flagged to execute JavaScript as necessary. Mocking is avoided here because the goal is to test the entire stack. Each controller action is tested at least once, maybe a few times (see the sketch below).
Low level: This is where all complex logic is tested - primarily models and helpers. I avoid mocking here as well. The tests hit the database or surrounding objects when necessary.
Notice there are no controller or view specs. I feel these are adequately covered in request specs.
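A sketch of what one of those high-level request specs might look like (the model, paths, and page content are hypothetical; VCR would wrap any external HTTP calls):

    # spec/requests/products_spec.rb -- drives the full stack through Capybara.
    require "spec_helper"

    describe "Products" do
      it "lists existing products" do
        Product.create!(name: "Widget")

        visit products_path

        expect(page).to have_content("Widget")
      end

      # The js: true flag switches to a JavaScript-capable driver when needed.
      it "creates a product", js: true do
        visit new_product_path
        fill_in "Name", with: "Gadget"
        click_button "Create Product"

        expect(page).to have_content("Gadget")
      end
    end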
Since there is little mocking, how do I keep the tests fast? Here are some tips.
Avoid excessive branching logic in the high level tests. Any complex logic should be moved to the lower level.
When generating records (such as with Factory Girl), use build first and only switch to create when necessary.
Use Guard with Spork to skip the Rails startup time. The relevant tests are often done within a few seconds of saving the file. Use a :focus tag in RSpec to limit which tests run when working on a specific area. If it's a large test suite, set all_after_pass: false, all_on_start: false in the Guardfile so the full suite only runs when needed (see the configuration sketch after these tips).
I use multiple assertions per test. Executing the same setup code for each assertion will greatly increase the test time. RSpec will print out the line that failed so it is easy to locate it.
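For reference, a sketch of the configuration pieces mentioned in these tips (exact options depend on your Guard, Spork, and RSpec versions):

    # Guardfile -- run specs through Spork's DRb server, and don't re-run the
    # whole suite automatically on a large project.
    guard "spork" do
      watch("config/application.rb")
      watch("spec/spec_helper.rb")
    end

    guard "rspec", cli: "--drb", all_after_pass: false, all_on_start: false do
      watch(%r{^spec/.+_spec\.rb$})
      watch(%r{^app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" }
    end

    # spec/spec_helper.rb (excerpt) -- let a :focus tag limit what runs.
    RSpec.configure do |config|
      config.filter_run focus: true
      config.run_all_when_everything_filtered = true
    end

    # In a spec: build an unsaved record unless the test really needs the DB.
    user = FactoryGirl.build(:user)   # no database hit
    user = FactoryGirl.create(:user)  # only when persistence actually matters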
I find mocking adds brittleness to the tests, which is why I avoid it. True, it can be great as an aid for OO design, but in the structure of a Rails app it doesn't feel as effective. Instead, I rely heavily on refactoring and let the code itself tell me how the design should go.
This approach works best on small to medium-sized Rails applications without extensive, complex domain logic.
Great questions and great discussion. @ryanb and @bkeepers mention that they only write two types of tests. I take a similar approach, but have a third type of test:
Unit tests: isolated tests, typically, but not always, against plain ruby objects. My unit tests don't involve the DB, 3rd party API calls, or any other external stuff.
Integration tests: these are still focused on testing one class; the difference is that they integrate that class with the external stuff I avoid in my unit tests. My models will often have both unit tests and integration tests, where the unit tests focus on the pure logic that can be tested without involving the DB, and the integration tests involve the DB. In addition, I tend to test third-party API wrappers with integration tests, using VCR to keep the tests fast and deterministic, but letting my CI builds make the HTTP requests for real to catch any API changes (see the sketch after this list).
Acceptance tests: end-to-end tests for an entire feature. This isn't just about UI testing via Capybara; I do the same in my gems, which may not have an HTML UI at all. In those cases, this exercises whatever the gem does end-to-end. I also tend to use VCR in these tests (if they make external HTTP requests), and as in my integration tests, my CI build is set up to make the HTTP requests for real.
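A sketch of the VCR arrangement described above (the gateway class, cassette name, and CI environment variable are hypothetical):

    # spec/spec_helper.rb (excerpt) -- replay recorded HTTP locally, but
    # re-record on CI so that API changes are caught.
    require "vcr"

    VCR.configure do |c|
      c.cassette_library_dir = "spec/cassettes"
      c.hook_into :webmock
      c.configure_rspec_metadata!
      c.default_cassette_options = { record: ENV["CI"] ? :all : :once }
    end

    # spec/integration/weather_gateway_spec.rb -- integration test of a thin
    # third-party API wrapper.
    describe WeatherGateway do
      it "fetches the current temperature", vcr: { cassette_name: "weather" } do
        expect(WeatherGateway.new.temperature_for("London")).to be_a(Numeric)
      end
    end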
As far as mocking goes, I don't have a "one size fits all" approach. I've definitely overmocked in the past, but I still find it to be a very useful technique, especially when using something like rspec-fire. In general, I mock collaborators playing roles freely (particularly if I own them, and they are service objects) and try to avoid it in most other cases.
Probably the biggest change to my testing over the last year or so has been inspired by DAS: whereas I used to have a spec_helper.rb that loads the entire environment, now I explicitly load just the class under test (and any dependencies). Besides the improved test speed (which does make a huge difference!), it helps me identify when my class under test is pulling in too many dependencies.
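A sketch of that faster setup (file and class names hypothetical): a lightweight helper that skips Rails entirely, used by specs for plain Ruby objects.

    # spec/fast_spec_helper.rb -- no Rails, no database; loads in milliseconds.
    $LOAD_PATH.unshift File.expand_path("../../app/models", __FILE__)

    # spec/price_calculator_spec.rb -- requires only the class under test.
    require "fast_spec_helper"
    require "price_calculator"

    describe PriceCalculator do
      it "adds tax to the amount" do
        calculator = PriceCalculator.new(double("tax_policy", tax_for: 5))
        expect(calculator.total(100)).to eq(105)
      end
    end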

How to build dependent tests for regression testing

I have an ASP.NET MVC project and I thought I could use a tool like MSTest or NUnit to perform regression testing from the controller layer down to the database. However, I hit an issue: tests are not designed to run in order (you can use ordered tests in MSTest, but the tests still run concurrently), and the other problem is how to make the data created by one test accessible to another.
I have looked at Selenium and WatiN, but I just wanted to write something that is not dependent on the UI layer, which is most likely going to change and increase the amount of work needed to maintain the tests.
Any suggestions? Is it just the wrong tool for the job? Should I just use Selenium/WatiN?
Tests should always be independent of each other, so that running order doesn't matter. If your tests depend on other tests you are losing control of what you are testing.
WatiN, and I'm assuming Selenium, won't solve your ordering problem. I use WatiN and NUnit for UI automation and the running order is not guaranteed, which initially posed similar problems to what you're seeing.
In the vein of what dskh answered, you want independent tests, and I've done this in two ways for integration / regression black-box-ish testing.
First: In your test setup, have any precondition data values set up so you're at a known good state. For system regression test automation, I've got a number of database scripts that get called to reset data to a known state; this adds some dependencies, so be conscious of the design. Note: in straight unit testing, look at using mock objects to take out dependencies and get your test to be testing one thing. Mock objects, stubbing method calls, etc. are the way to go if you can, which, based on your question, sounds likely.
Second: For cases where certain things absolutely had to be set up in a certain way, and scripting them in test setup would have required a ridiculous amount of system-internal knowledge (e.g. all users set up + all permissions set up + etc.), a small number of "bootstrap" tests were created, in their own namespace to allow easy running via NUnit, to bootstrap the system. Keeping the number of tests small and making sure the tests were very stable was paramount. Now, on a fresh install, the bootstrap tests are run first and serve as advanced smoke tests; no further tests are run if any of the bootstrap tests fail. It is clunky in some ways, but the alternatives were clunkier or more time/resource/whatever consuming.
Update
The link below (and I assume the project) is dead.
The best option may be to use Selenium and the Page Object Model.
See here: http://code.tutsplus.com/articles/maintainable-automated-ui-tests--net-35089
Old Answer
The simplest solution I have found is Rob Conery's Quixote:
https://github.com/robconery/Quixote
It works by firing HTTP requests and consuming responses.
It is simple to set up and use, and provides integration testing. The tool also allows a series of tests to be executed in sequence, so that test dependencies can be created.

How to ensure I am testing everything, have all the features, and only those features, for old code?

We are running a pretty big website, and we have some critical legacy code there that we want to have well covered.
At the same time, we would like to have a report of the features we currently support and cover, and we also want to be sure we really cover every possible corner case. Some code paths are critical and will need many more tests even after achieving 100% coverage.
As we are already using RSpec, and RSpec has "feature" and "scenario" keywords, we tried to make a list using RSpec rather than going for Cucumber, but I think this question applies to any testing tool.
We want something like this:
feature "each advertisement will be shown a specified % of impressions"
scenario "As ..."
This feature is minimal from the point of view of managers but huge in the code. It involves a backend tool, a periodic task, logic in the models, and views in both the backend and the front end.
We tried to divide it like this:
feature "each creative will be shown a specified % of impressions"
context "configuration"
context "display"
scenario "..."
context "models"
it "should ..."
context "frontend"
context "display"
scenario "..."
context "models"
it "should ..."
Configuration takes place in another tool, display would contain integration tests, and models would contain unit tests.
I repeat myself, but the idea is to ensure that the feature is really finished (including building the configuration tool) and 100% tested.
But looking at this file, it is neither an integration test nor a unit test, and it doesn't even belong to any particular project.
Definitely there should be a better way of managing this.
Any experiences, resources, or ideas you can share to guide us?
The scenario you're describing is a huge reason why BDD is so popular. It forces you to write code in a way that's easy to test. Having said that, you're obviously not going to go back and rewrite the entire legacy application. There are a few things you should consider though:
As you go through each section of the application, you should ask yourself 'Will it be harder to refactor than to write tests for this?'. Sometimes refactoring before writing tests just cannot be avoided.
Testing isn't about 100% coverage, it's about 100% confidence. As you mentioned, you plan on writing more tests even when you have 100% coverage. This is because you're going for confidence. Once you're confident in a piece of code, move on. You can always come back to it at a later time.
From my experience, Cucumber is easier for tests that cover a large portion of the application. I think the reason for this is that writing out the tests in plain English makes you think of things you wouldn't have otherwise. It also allows you to focus on the behavior instead of the code and can make refactoring a less daunting task.
You don't really get much out of adding tests to existing code if you never touch that code again. Start by testing the code you want to make changes to (i.e. refactor) first.
I also recommend the book Rails Test Prescriptions, specifically one of the last chapters called "Testing a Legacy Application".

BDD with Cucumber and rspec - when is this redundant?

A Rails/tool specific version of: How deep are your unit tests?
Right now, I currently write:
Cucumber features (integration tests) - these test against the HTML/JS that is returned by our app, but sometimes also test other things, like calls to third-party services.
RSpec controller tests (functional tests), originally only if the controllers have any meaningful logic, but now more and more.
RSpec model tests (unit tests)
Sometimes this is entirely necessary; it is important to test behavior in the model that is not entirely obvious or visible to the end user. When models are complex, they should definitely be tested. But other times, it seems to me the tests are redundant. For instance, do you test method foo if it is only called by bar, and bar is tested? What if bar is a simple helper method on a model that is used by, and easily testable in, a Cucumber feature? Do you test the method in RSpec as well as Cucumber? I find myself struggling with this, as writing more tests takes time, and maintaining multiple "versions" of what is effectively the same behavior makes maintaining the test suite more time-intensive, which in turn makes changes more expensive.
In short, do you believe there is a time when writing only Cucumber features is enough? Or should you always test at every level? If you think there is a grey area, what is your threshold for "this needs a functional/unit test"? In practical terms, what do you do currently, and why (or why not) do you think it's sufficient?
EDIT: Here's an example of what might be "test overkill." Admittedly, I was able to write this pretty quickly, but it was completely hypothetical.
Good question, one I've grappled with recently while working on a Rails app, also using Cucumber/RSpec. I try to test as much as possible at every level, however, I've also found that as the codebase grows, I sometimes feel I'm repeating myself needlessly.
Using "Outside-in" testing, my process usually goes something like: Cucumber Scenario -> Controller Spec -> Model Spec. More and more I find myself skipping over the controller specs as the cucumber scenarios cover much of their functionality. I usually go back and add the controller specs, but it can feel like a bit of a chore.
One step I take regularly is to run rcov on my cucumber features with rake cucumber:rcov and look for notable gaps in coverage. These are areas of the code I make sure to focus on so they have decent coverage, be it unit or integration tests.
I believe models/libs should be unit tested extensively, right off the bat, as it is the core business logic. It needs to work in isolation, outside of the normal web request/response process. For example, if I'm interacting with my app through the Rails console, I'm working directly with the business logic and I want the reassurance that methods I call on my models/classes are well tested.
At the end of the day, every app is different and I think it's down to the developer(s) to determine how much test coverage should be devoted to different parts of the codebase and find the right balance so that your test suite doesn't bog you down as your app grows.
Here's an interesting article I dug up from my bookmarks that is worth reading:
http://webmozarts.com/2010/03/15/why-not-to-want-100-code-coverage/
Rails has a well-tested codebase, so I'd avoid re-testing things that are already covered by the framework's own tests.
For example, unless it is custom code, it is pointless to test the results of validations at the unit and functional levels. I'd test them at the integration level, though. Cucumber features act as specifications for your project, so it is good to specify that you need a validation for x and y, even if the implementation is a single-line declaration in the model.
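For instance (attribute and message text hypothetical, and shown here as a Capybara feature spec rather than a Cucumber feature, though the idea is the same): the implementation is a one-line declaration, while the spec that documents the requirement lives at the integration level.

    # app/models/user.rb -- the implementation is a single declaration...
    class User < ActiveRecord::Base
      validates :email, presence: true
    end

    # spec/features/sign_up_spec.rb -- ...but the requirement is specified
    # where the user actually sees it.
    feature "Signing up" do
      scenario "rejects a submission without an email" do
        visit new_user_path
        click_button "Sign up"
        expect(page).to have_content("Email can't be blank")
      end
    end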
You usually don't want to have both Cucumber stories and RSpec controller specs/integration tests. Pick one (generally Cucumber is the better choice, except for certain special cases). Then use RSpec for your models, and that's all you need.
I test complex model/lib methods with RSpec, then the main business logic on the web side with Cucumber, so I'm sure that the main features of the site will work; then, if I get more time and resources, I test everything else.
It's easier to write simple specs for simple methods; much easier than writing cukes.
If you keep your methods simple, and keep your specs simple by testing only the logic inside that method, you will find joy in unit testing (see the sketch at the end of this answer).
If anything is redundant, it's the Cucumber tests. If you have well-tested models and libs, your software should work.
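A sketch of what "simple method, simple spec" means in practice (class and attribute names hypothetical):

    # A small, single-purpose method...
    class Order
      def total(items)
        items.sum { |item| item.price * item.quantity }
      end
    end

    # ...and a spec that tests only the logic inside it.
    describe Order do
      it "sums price times quantity across the items" do
        items = [double(price: 10, quantity: 2), double(price: 5, quantity: 1)]
        expect(Order.new.total(items)).to eq(25)
      end
    end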
The purpose of Cucumber is not to run integration tests. Cucumber, and BDD in general, works as a communication platform, a way to improve the conversation inside a big team of developers and non-developers who are building big, complex software. BDD is very useful for communicating a model and its domain at the same level to everybody in the team, even to people who don't know anything about computers.
If that is not your scenario, don't use Cucumber, because you don't need it. Use RSpec and Capybara to test your JS code and to write your integration tests.

Resources