How to show how many assertions were performed during an execution of tests in Maven - maven-3

I have a lot of tests using JUnit 4 and Maven in Java, and I'm looking for a way to show how many assertions a specific test, or the complete run, performed. Is there a way to do that?
Currently I'm using org.junit.Assert and org.assertj.core.api.Assertions to perform the assertions in my tests.
Thanks in advance!

I can't speak to AssertJ, but JUnit does not keep track of how many times the assert methods are called.
Keeping track of how many assertions have been made isn't a useful metric. You can have useful tests with no assertions (for example, if you use @Test(expected = SomeException.class)), and you can have not-so-useful tests with many assertions.

Related

How should I test a Rails application with RSpec to get complete test coverage?

When writing specs for a simple Rails app, is the following a correct approach for full test coverage?
Write feature specs for all user stories
Write controller specs to ensure that individual action responses are correct and all required variables are set
Write model specs to ensure all methods, validations, etc. are working as intended
Write mailer specs
Write routing specs
Is this enough, too much (e.g. can I skip some lower-level specs if I've written feature specs), or not enough? Why?
You don't need to write specs for every object in every layer either to get 100% test coverage or to test-drive (require you to implement) all of the important behavior in your application. Instead, as behavior-driven development (BDD) advises, write specs outside in, and write lower-level specs only as necessary.
The most important measure of test completeness is requirement coverage: it's helpful for each user story, and each detail of each story that requires new code, to be represented in at least one test. If you're following typical agile practices (mentioning user stories suggests that you are) your tests are probably the only place where you record your requirements, so you probably can't put a number on this kind of coverage. It's also helpful to have
line coverage (what most people mean when they say test coverage), meaning that every line of code is exercised by at least one test, and
integration coverage, meaning that every method call from one class to another is exercised by at least one test.
For each story,
Write only the feature specs that will test-drive all of the story's distinct happy paths.
Write additional feature specs to ensure integration coverage of architecturally interesting minor variations of happy paths and of sad paths. For example, I often write three feature specs for a story that involves a form: one where the user fills in every possible field and succeeds, another where the user fills in as little information as possible and still succeeds (ensuring that unspecified values and defaults work as intended), and one where the user makes a mistake, fails, corrects the mistake and succeeds.
At this point you've already test-driven every layer (controllers, models, views, helpers, mailers, etc.) into existence, with only feature specs.
Write model and helper specs to drive out detailed requirements which live entirely in those classes. For example, once you've written a single sad-path feature spec that establishes that entering one particular invalid attribute sends the user back to edit their form submission and displays a message, you can handle other invalid attributes entirely by writing more examples in that model's spec that test that model attributes are validated, and let the architecture that you've already test-driven propagate the errors back to the user.
Note that although your feature specs already test the happy paths through model and helper methods, as soon as you start writing examples for a method for minor or error cases, you'll probably want to write the happy-path example or examples for that method too, so you can see the entire description of the method in one place, and so you can test the method fully just by running all its examples and not also have to run any feature specs.
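For instance, a minimal sketch of that kind of model spec might look like the following. The User model, its attributes, and the presence validation are hypothetical, invented only to illustrate the point: each additional invalid-attribute case becomes one cheap model example rather than another feature spec.

    # spec/models/user_spec.rb -- hypothetical model; assumes rspec-rails
    # (use spec_helper instead of rails_helper on older versions).
    require 'rails_helper'

    RSpec.describe User, type: :model do
      it 'is valid with a name and an email' do
        expect(User.new(name: 'Alice', email: 'alice@example.com')).to be_valid
      end

      it 'is invalid without an email' do
        user = User.new(name: 'Alice', email: nil)
        expect(user).not_to be_valid
        expect(user.errors[:email]).to be_present
      end
    end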
You might not need some kinds of specs at all:
Well-factored controller actions are short and have few or no conditionals, so you often won't need any controller specs at all. Write them only when needed, and stub out model, mailer, etc. behavior to keep them simple and fast (see the sketch after this list).
Similarly, views and mailers should have few or no conditionals (complex code should be refactored into helper and model methods), so you often won't need view or mailer specs at all.
Your feature specs will have test-driven all the routes you need, so you probably won't need routing specs. I've only ever gotten use out of routing specs when I had to do a major refactor of routes, as when upgrading from one major version of Rails to the next.
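As a rough sketch of what a stubbed controller example can look like when you do need one: the UsersController, the User lookup, and the request syntax are assumptions, and the exact get syntax varies between rspec-rails/Rails versions.

    # spec/controllers/users_controller_spec.rb -- hypothetical controller;
    # the model layer is stubbed so the example stays small and fast.
    require 'rails_helper'

    RSpec.describe UsersController, type: :controller do
      describe 'GET #show' do
        it 'looks the user up and responds successfully' do
          user = instance_double('User', id: 42)
          allow(User).to receive(:find).and_return(user)

          get :show, params: { id: 42 }  # older rspec-rails: get :show, id: 42

          expect(User).to have_received(:find)
          expect(response).to have_http_status(:ok)
        end
      end
    end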
As long as you always write a test before you write new code, you'll always have 100% line coverage.
That testing strategy sounds really comprehensive. If you had all of these tests in place you would have great test coverage. However, it would take you longer to deliver your project, and you would not be as agile as someone doing more limited testing. Testing has to suit the project. Don't over-test: over-testing costs time and money. Don't under-test: under-testing also costs time and money.
There are right ways to do unit testing and right ways to do integration testing. The glove has to fit. If your application is largely front-end facing, then perhaps it's best to start with integration tests. If you're writing a back-end application, or perhaps an API, then unit tests may be a better place to start. I think approaching with one style of testing and then expanding to other styles is a better start than trying to test every layer of your application at once.
Why not start with simple unit tests? They are easy to write. Write these tests and then track how many bugs you ship. Are you letting in too many bugs? Are you having a lot of regression issues? Are there bugs getting through to production that your suite is not picking up? If the answer is yes, then maybe it's time to write some higher-level tests. Remember, the higher-level a test is, the more development cost you will have to pay.
If you're not shipping bugs, then you have no reason to write any more tests. Remember the end goal here: we want to ship bug-free code. If we can write one test, and one test alone, that ensures we are doing this, then there is no reason to test any further.

Integration testing for a background job in Rails

I'm building an application that has two parts.
The first part periodically fetches data from an external resource and populates a database.
The second part displays it at the user's request.
It is very easy to cover the second part with integration tests. The question is, how do I write integration tests, namely Cucumber tests, for the first part? Do I need to write integration tests for this? Or are unit tests enough?
Thanks.
I saw a similar example in The Cucumber Book.
You may take a peek into the source code provided for Chapter 9, "Dealing with Message Queues and Asynchronous Components".
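In the meantime, one common shape for such a test is to stub the external resource at the HTTP boundary and run the job synchronously inside a step, then assert on the database. Everything below is a hypothetical sketch: WebMock is assumed for the stubbing (loaded in features/support/env.rb), and the ImportJob/Article names are made up.

    # features/import.feature
    Feature: Importing data from the external resource
      Scenario: A periodic fetch populates the database
        Given the external resource returns 3 articles
        When the import job runs
        Then 3 articles should be stored in the database

    # features/step_definitions/import_steps.rb
    Given /^the external resource returns (\d+) articles$/ do |count|
      # Stub the HTTP boundary (here with WebMock) instead of hitting
      # the real service; the endpoint and payload are made up.
      body = Array.new(count.to_i) { |i| { title: "Article #{i}" } }.to_json
      stub_request(:get, 'https://example.com/articles').to_return(body: body)
    end

    When /^the import job runs$/ do
      # Run the job synchronously rather than waiting for the scheduler.
      ImportJob.new.perform
    end

    Then /^(\d+) articles should be stored in the database$/ do |count|
      expect(Article.count).to eq(count.to_i)
    end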

How to build dependent tests for regression testing

I have an ASP.NET MVC project and I thought I could use a tool like MS Test or NUnit to perform regression testing from the controller layer down to the database. However, I hit an issue: tests are not designed to run in order (you can use ordered tests in MS Test, but the tests still run concurrently), and the other problem is how to make the data created by one test accessible to another.
I have looked at Selenium and WatiN, but I wanted to write something that is not dependent on the UI layer, which is most likely going to change and increase the amount of work needed to maintain the tests.
Any suggestions? Is it just the wrong tool for the job? Should I just use Selenium/WatiN?
Tests should always be independent of each other, so that running order doesn't matter. If your tests depend on other tests you are losing control of what you are testing.
WatiN, and I'm assuming Selenium, won't solve your ordering problem. I use WatiN and NUnit for UI automation and the running order is not guaranteed, which initially posed similar problems to what you're seeing.
In the vein of what dskh answered, you want independent tests, and I've done this in two ways for Integration / Regression black-ish box testing.
First: In your test setup, have any precondition data values set up so you're at a known "good state". For system regression test automation, I've got a number of database scripts that get called to reset data to a known state; this adds some dependencies, so be conscious of the design. Note: in straight unit testing, look at using mock objects to take out dependencies and get your test to be "testing one thing". Mock objects, stubbing method calls, etc. are the way to go if you can, which based on your question sounds likely.
Second: For cases where certain things absolutely had to be set up in a certain way, and scripting them in the test setup would have required a ridiculous amount of internal system knowledge (e.g. all users set up + all permissions set up + etc.), a small number of "bootstrap" tests were set up, in their own namespace to allow easy running via NUnit, to bootstrap the system. Keeping the number of tests small and making sure the tests were very stable was paramount. Now, on a fresh install, the bootstrap tests are run first and serve as advanced smoke tests; no further tests are run if any of the bootstrap tests fail. It is clunky in some ways, but the alternatives were clunkier or more time/resource-consuming.
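The known-state idea is framework-agnostic; sketched in RSpec terms (the reset script, the :integration tag, and the smoke example are all hypothetical), it amounts to something like the following, with NUnit's [SetUp]/[OneTimeSetUp] attributes playing the same role in .NET.

    # spec/support/known_state.rb -- reset to a documented baseline before
    # each integration example; the script name is an assumption.
    RSpec.configure do |config|
      config.before(:each, type: :integration) do
        system('bin/reset_test_db_to_baseline') || raise('baseline reset failed')
      end
    end

    # spec/bootstrap/smoke_spec.rb -- a tiny, very stable suite run first on
    # a fresh install; if these fail, nothing else is worth running.
    RSpec.describe 'bootstrap smoke checks' do
      it 'can reach the database' do
        expect { ActiveRecord::Base.connection.execute('SELECT 1') }.not_to raise_error
      end
    end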
Update
The link below (and I assume the project) is dead.
The best option may be using Selenium and the Page Object Model.
See here: http://code.tutsplus.com/articles/maintainable-automated-ui-tests--net-35089
Old Answer
The simplest solution I have found is Rob Conery's Quixote:
https://github.com/robconery/Quixote
It works by firing HTTP requests and consuming responses.
It is simple to set up and use, and provides integration testing. This tool also allows a series of tests to be executed in order, so you can create test dependencies.

Integration tests implementation

I have a three-tiered app:
web app (asp.net mvc for simplicity here),
business services
data repositories
And I know there are four types of integration tests:
top down
bottom up
sandwich (combination of the top two)
big bang
I know I would write big-bang tests just like unit tests but without any mocking so I would employ a backend DB as well...
Questions
I don't know how to write the other types of integration tests.
How do I write the non-big-bang types of integration tests?
Should integration tests be equal to unit tests meaning same number of tests, but testing without mocks? Or should these tests test something completely different?
Could anybody provide any information on how to do this, or whether it's actually feasible at all?
I suggest doing these:
1. Unit tests: these must not hit any external resource.
2. Focused integration tests (I guess that'd be bottom up). You should have code that is very close to the external resource, whose sole responsibility is integrating with it. Don't try to unit test those classes; instead, write very focused tests that hit the real resource and don't have to deal with the rest of the logic in your system. Keep these integration classes as thin as possible (see the sketch below).
3. Full system tests (I guess big bang). I mean with the UI and everything (or the API, if that's your endpoint). Make sure you cover as much as possible with the previous tests; these are more like simple checks that the underlying pieces are hooked up appropriately.
Depending on your system, you may or may not want to complement 3 with integration tests at the top level of the code, but without involving the UI. Regardless of which option you take, make sure to have more comprehensive coverage through unit and focused integration tests, as testing various behaviors at the top level has a complexity that can get out of control very quickly.
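A sketch of a focused integration test in the sense of point 2, for a hypothetical class whose only job is talking to the database (the ArticleRepository and its methods are invented for illustration):

    # spec/integration/article_repository_spec.rb -- hits the real database
    # and nothing else; no mocks, no controllers, no UI.
    require 'spec_helper'

    RSpec.describe ArticleRepository do
      it 'round-trips an article through the real database' do
        repo = ArticleRepository.new
        repo.save(title: 'Hello')

        expect(repo.find_by_title('Hello')).not_to be_nil
      end
    end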
Should integration tests be equal to unit tests meaning same number of tests, but testing without mocks? Or should these tests test something completely different?
As I mentioned in 1 and 2, it's best when those test different things. This depends on the system, but I'd usually expect the number of unit tests to be a few times the number of integration tests. For full system tests, make sure you have enough so that you can tell all the pieces were hooked up correctly, but not so many that it becomes too complex to test each scenario.

Should I mock my model in rails controller tests?

I am finding holes in my coverage because I have been mocking my models in controller examples. When I remove a model's method upon which a controller depends, I do not get a failure.
Coming from TDD in statically typed languages, I would always mock dependencies to the object under test that hit the database to increase speed. I would still get failures in the above example, since my mocks subclassed the original object. I am looking for best practices in a dynamic language.
Thanks.
UPDATE:
After getting a lot of conflicting opinions on this, it seems it boils down to which philosophy you buy into.
The RSpec community appears to embrace heavily stubbing dependencies to achieve isolation of the object under test. Acceptance tests (traditionally known as integration tests ;) are used to ensure your objects work with their runtime dependencies.
The shoulda / Test::Unit community appears to stay away from stubbing as much as possible. This allows your tests to confirm your object under test actually works with its dependencies.
This video summarizes this nicely: http://vimeo.com/3296561
Yes, in your controller examples, mock your models. In your model examples, test your models.
If you're using Mocha, the following should do it.
Mocha::Configuration.prevent(:stubbing_non_existent_method)
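For example, you could put that line in your test helper so it applies to every run (newer Mocha releases expose the same switch through a Mocha.configure block); the file path here is just the conventional one.

    # test/test_helper.rb (or spec/spec_helper.rb), after Mocha is loaded.
    # Fail a test if it stubs a method the real object doesn't define,
    # which catches the "stubbed method was removed from the model" hole.
    Mocha::Configuration.prevent(:stubbing_non_existent_method)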
While writing unit tests, the whole aim should be to test that unit only. Consider the model as one unit and cover it separately. A change in the model should not directly impact the unit test coverage of the controller.
