myBatis code coverage tool

I am going to write a code coverage tool for myBatis SQL maps.
Basically, I want to know whether each statement has been called during "mvn test". The second step is to compare the executed statements against the existing ones and print the difference somehow.
I wonder whether such a tool already exists, because I couldn't find one.

For those who are interested: I've finally implemented this feature.
Just a few facts. I've created a framework based on spring-test and JUnit. The main goal was to test myBatis statements on multiple databases at the same time.
The coverage tool is just a feature to help developers spot missing tests.
The way the coverage tool works can be explained in the following steps (a code sketch follows the list):
Get all statement names from myBatis once, using the getMappedStatementNames() method of the SqlMapExecutorDelegate class.
Collect all statement names that were called.
Print the difference between all statements and the executed ones to the log.
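A minimal sketch of these steps, assuming slf4j for logging (the class and method names below are illustrative; the full list of statement names would come from getMappedStatementNames() as described above, and the wiring into the test runner is omitted):

    import java.util.HashSet;
    import java.util.Set;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class StatementCoverageTracker {

        private static final Logger LOG =
                LoggerFactory.getLogger(StatementCoverageTracker.class);

        // Statement names observed during the test run.
        private final Set<String> executed = new HashSet<>();

        // Call this from wherever statements are executed,
        // e.g. a thin wrapper around the SqlMap client.
        public void recordExecution(String statementName) {
            executed.add(statementName);
        }

        // After "mvn test": diff all known statement names against
        // the executed ones and log each untested statement.
        public void reportUntested(Set<String> allStatementNames) {
            Set<String> untested = new HashSet<>(allStatementNames);
            untested.removeAll(executed);
            for (String name : untested) {
                LOG.warn("COVERAGE: Statement {} wasn't tested.", name);
            }
        }
    }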
There are many other improvements in this small test framework. All of this is integrated into our automated build system, integrated with a Bamboo server, etc.
In the end I got output like this:
WARN 18/04/2012 15:14:50 (MultipleDatabaseRunner.java:108) COVERAGE: Statement Audit.getAudit wasn't tested.
...

Related

Reports from Jenkins and Jira

We are using Jenkins to run our Selenium automation tests, and my manager wants to see the list of failed builds and what percentage of the tests passed for each build. We also have manual tests that get executed in JIRA. I need to combine both and derive test metrics from them.
The way I think of proceeding is as follows:
Get the Jenkins data into JIRA first, using the Jenkins plugin for JIRA.
Use the JIRA API to collect the test results from Jenkins and from the manual tests run in JIRA.
Prepare a dashboard in JIRA to display all the metrics.
Could you tell me whether the above approach is correct, and suggest anything additional?
Thanks in advance!
Are you using Cucumber? In that case you could use the Cucumber reports plugin for Jenkins. If it doesn't suit your needs but you still use Cucumber, you can also generate reports in a format like JSON, which you could later parse to get your data (see the sketch below).
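For instance, a small sketch of pulling a pass rate out of such a report; it assumes Cucumber's standard JSON formatter output (features containing elements containing steps, each step with a result.status), a file named cucumber.json, and Jackson on the classpath:

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import java.io.File;

    public class CucumberReportStats {
        public static void main(String[] args) throws Exception {
            // The top level of the report is an array of features.
            JsonNode features = new ObjectMapper().readTree(new File("cucumber.json"));
            int total = 0, passed = 0;
            for (JsonNode feature : features) {
                for (JsonNode scenario : feature.path("elements")) {
                    for (JsonNode step : scenario.path("steps")) {
                        total++;
                        if ("passed".equals(step.path("result").path("status").asText())) {
                            passed++;
                        }
                    }
                }
            }
            System.out.printf("%d of %d steps passed (%.1f%%)%n",
                    passed, total, total == 0 ? 0.0 : 100.0 * passed / total);
        }
    }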
I have the feeling that what you want to do is a bit complicated for not much benefit. If the tests are failing, you will most likely have to look at what happened anyway. Having the percentage is nice, but you can spend hours or days tailoring this just to have something cute that your manager wants but that serves no specific purpose. I would opt for something simpler:
If the automated tests fail, create a JIRA issue automatically from Jenkins (see the sketch after this list). You could put the build number in a label, or in the title. You could also create an issue on every run, to record that build nr. ## was tested and everything went OK.
As part of the manual testing process, report in JIRA what failed.
Create a dashboard and play a bit with labels and search to show which builds failed.
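As a sketch of that first point, creating an issue through JIRA's REST API from a build step could look roughly like this; the base URL, credentials, project key, summary and issue type are all placeholders, and error handling is omitted (the endpoint itself is JIRA's documented /rest/api/2/issue create call):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class JiraIssueReporter {
        public static void main(String[] args) throws Exception {
            // Placeholder issue fields; a real job would fill in the build number.
            String body = """
                {"fields": {
                    "project": {"key": "QA"},
                    "summary": "Automated tests failed for build #123",
                    "issuetype": {"name": "Bug"}
                }}""";
            String auth = Base64.getEncoder()
                    .encodeToString("user:password".getBytes());
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://jira.example.com/rest/api/2/issue"))
                    .header("Authorization", "Basic " + auth)
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }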
I would suggest AssertThat BDD & Test Management in Jira.
It provides end-to-end integration, from feature creation to manual and automated test execution and reporting, with out-of-the-box integration with test automation frameworks through plugins.
The plugin lets you download feature files stored in JIRA before the run, execute the tests in the usual way, and then upload the Cucumber test results back to JIRA, which gives you a clear view of the testing progress in one place.
More info and usage examples are on the website: https://www.assertthat.com/

Executable requirements, PHPUnit and Jenkins strategy

We use Jenkins and PHPUnit in our development. For a long time I have wanted our team to start using executable requirements: the architect/team leader/whoever defines requirements writes tests before the actual code, and the actual code is then written by another team member. Because the executable-requirements tests are committed to the repository before the actual code exists, Jenkins rebuilds the project and rightfully fails, and the project remains in a failed state until the new code is written, which defeats the XP rule of keeping the project in a good state at all times.
Is there any way to tell PHPUnit that such-and-such tests should not be run under Jenkins, while any dev can still execute them locally with ease? Tweaking phpunit.xml is not really desirable: local changes to the tests themselves are better, as they are easier to keep track of.
We tried markTestIncomplete() and markTestSkipped(), but they don't really do what we need: the executable-requirements tests are genuinely complete and should not be skipped, and using these functions prevents easy execution of such tests during development.
The best approach in our case would be a PHPUnit option like --do-not-run-requirements, used only by the PHPUnit run that Jenkins executes. On a dev machine this option would not be used, and each executable-requirements test would carry an #executableRequirements meta-comment at the top (removed only after the actual code is created and tested). The issue is that PHPUnit does not have such functionality.
Maybe there is a better way to achieve executable requirements without "false" failures in Jenkins?
With PHPUnit, tests can be filtered for execution. Either annotate the tests that should not be executed in one environment with the @group annotation and then use --exclude-group <name-of-group> (or the <group> element of PHPUnit's XML configuration file), or use the --filter <pattern> command-line option. Both approaches are covered in the documentation.
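For example, the executable-requirements tests could be tagged with a group, mirroring the #executableRequirements meta-comment idea from the question (the test name here is made up):

    /**
     * @group executableRequirements
     */
    public function testBuyerReceivesInvoiceAfterPurchase()
    {
        // ...
    }

The Jenkins job then excludes that group, either on the command line:

    phpunit --exclude-group executableRequirements

or in the phpunit.xml that only the CI job uses:

    <groups>
        <exclude>
            <group>executableRequirements</group>
        </exclude>
    </groups>

Developers running plain phpunit locally still execute everything, and the annotation is simply removed once the production code exists.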
"For long time I wanted to start to use Test Driven Development in our team. I don't see any problem with writing tests before actual code."
This is not TDD.
To quote from Wikipedia:
"first the developer writes an (initially failing) automated test case that defines a desired improvement or new function, then produces the minimum amount of code to pass that test, ..."
Notice the test case, in the singular.
Having said that, you are quite welcome to define your own development methodology, whereby one developer writes tests (in the plural) and commits them to version control, and another developer writes code to satisfy the tests.
The solution to your dilemma is to commit the tests to a branch and have the other developer work in that branch. Once all the tests are passing, merge with trunk, and Jenkins will see the whole lot and give its opinion on whether the tests pass.
Just don't call it TDD.
I imagine it would not be very straightforward in practice to write tests without any basic framework in place. Hence, the 'minimum amount of code to pass the test' approach you suggested is not a bad idea.
Not necessarily a TDD approach:
Who writes the tests? If someone who works with requirements or a QA member writes the tests, you could probably simply write empty tests (so they don't fail). This approach will make sure that the developer covers all the cases the other person has thought about. Example test methods would be public void testThatObjectUnderTestReturnsXWhenACondition and public void testThatObjectUnderTestReturnsZWhenBCondition. (I like long descriptive names, so there is no confusion about what I am thinking; you can also use comments to describe your tests.) The DEVs can write code and finish the tests, or let someone else finish the tests later. Another way of stating this is to write executable requirements. See Cucumber/Steak/JBehave as executable-requirements tools.
Having said the above, we need to differentiate whether you are trying to write executable requirements or unit/integration/acceptance tests.
If you want to write executable requirements, anyone can write them, and they could be left empty to stop them from failing. The DEVs then fill them in and make sure the requirements are covered. My opinion is to let the DEVs handle unit/integration/acceptance tests using TDD (actual TDD), and not to separate the responsibility of writing code from writing the appropriate unit/integration/acceptance tests for that code.

Jenkins: files/folders selection from subversion

I'm looking for a Jenkins plugin that would allow selection of file(s)/folder(s) for a parameterized build.
The purpose is to be able to select different tests to execute, each test being defined as an .xml file in an SVN repo.
Example repo structure:
tests/business/cars/buy.xml
tests/business/cars/sell.xml
tests/system/core/stuff.xml
I'm not sure whether these quite get at what you're looking for, but I found the following two possibilities:
Test In Progress, which, based on its GitHub information, should allow you to run tests based on pattern matching and view their progress as they run.
RQM, an IBM tool made to let you run specific test cases in a test plan that have a custom property you configure and provide at test execution time.
Edit:
I just found Multi Module Test Publisher, which may be a little closer to what you're looking for. It acts as a replacement for the normal JUnit plugin and allows you to group JUnit tests into suites and view statistics on each suite separately.

No valid coverage data available in Rails stats report

"No valid coverage data available.
Tracking and trending code coverage works best when like is compared with like. In this regard it is best to only track builds when all unit tests are passing.
This plugin will not report code coverage until there is at least one stable build."
What should I do to get a Rails stats report?
@krs - From what you've said in your comments, you are correct.
rails stats only provides code coverage metrics for the tests at that point in time; it does not intend to keep data between builds and report test results vs. code coverage.
You can connect your CI to an external service to see test coverage vs. test results (e.g. https://coveralls.io).
Please add the coverage.xml file outside the project. That works fine for me.

Free alternatives to Atlassian Clover?

Reasking my older question:
Java test coverage: who covers what?
Background: I look at Sonar's coverage report for a class and want to know which test contributes to the coverage of a specific line/branch, so that it is easy to go to that test and add a test for a newly introduced if-branch.
Are there other (preferably free) alternatives to Clover in the IDE? Perhaps even ones that can be included in Sonar?
Or maybe tricks to enhance or accumulate information with some scripting in EMMA reports?
Or, going even further, patch EMMA or Cobertura to log the required info (instead of logging a "1" for counting, one could well log the names of the class under test and of the test, I assume).
Thanks!
You should definitely give JaCoCo a try. Its integration with Sonar allows you to benefit from new features, for example:
merge coverage from unit and integration tests (a pom sketch follows this list). See http://www.sonarsource.org/sonar-3-3-in-screenshots/
track the relations between tests and tested code (since Sonar 3.5). You can find a screenshot on the documentation page: http://docs.codehaus.org/display/SONAR/Resource+Viewer#ResourceViewer-CoverageTab
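For the first point, a rough pom.xml sketch of collecting JaCoCo data for both unit and integration tests; the execution ids here are made up, the plugin version is omitted, and the goal names come from the jacoco-maven-plugin:

    <plugin>
        <groupId>org.jacoco</groupId>
        <artifactId>jacoco-maven-plugin</artifactId>
        <executions>
            <!-- JaCoCo agent for unit tests (surefire) -->
            <execution>
                <id>prepare-unit-test-agent</id>
                <goals>
                    <goal>prepare-agent</goal>
                </goals>
            </execution>
            <!-- JaCoCo agent for integration tests (failsafe) -->
            <execution>
                <id>prepare-integration-test-agent</id>
                <goals>
                    <goal>prepare-agent-integration</goal>
                </goals>
            </execution>
        </executions>
    </plugin>

Sonar can then be pointed at the resulting execution data files to get the merged view described above.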
