How do you label flaky tests using junit? - jenkins

How do I label flaky tests in the JUnit XML syntax that Jenkins uses to give us a report? Jenkins gives me a nice report of tests that succeeded and failed. I would like to know which tests are known to be flaky.

Certainly a flaky test is something that needs to be avoided and fixed: tests should always pass, and if they don't, they need to be worked on.
However, we don't live in a perfect world, and thus it might be helpful to identify tests that have failed every now and then in the past.
Since Jenkins keeps track of test results you can step through the history of a particular test case (using the "Previous/Next build" links).
Additionally there are two Jenkins plugins that might be helpful:
Test Results Analyzer Plugin: this plugin lets you see the history of a particular test case (or test suite) at a glance, plus it adds nice charts.
Flaky Test Handler Plugin: this plugin has deeper support for flaky tests (e.g. rerunning them), though it is somewhat restricted to Maven and Git; a sketch of the Maven/Jenkins side follows below.
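A minimal sketch, assuming a Maven project built by a Jenkins pipeline (the rerun property and report path are standard Surefire/JUnit-plugin usage, everything else is a placeholder): Surefire retries failing tests a couple of times, and the published reports are the data the Flaky Test Handler plugin builds on.
```groovy
node {
    checkout scm
    // Rerun each failing test up to 2 extra times; a test that passes on a
    // retry shows up as flaky in the Surefire reports instead of failing
    // the build outright.
    sh 'mvn test -Dsurefire.rerunFailingTestsCount=2'
    // Publish the JUnit-format results so Jenkins (and the plugin) can
    // track them per test case across builds.
    junit '**/target/surefire-reports/*.xml'
}
```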

You don't label such tests - labelling them is not helpful. Your primary goal is to eliminate the "flakiness" of such tests: by identifying the root cause of the problem and fixing the production code or the test code, or both; or, worst case, by deleting or @Ignore'ing those tests.
Disclaimer, of course: Jenkins can't tell you which test cases are flaky. How could it?!
If you assume that you have flaky test cases, you could spend some time creating a Jenkins setup where all tests are run 5, 10, or 50 times against the same build output, and then compare the statistics. But this is nothing that comes for free - you will have to implement it yourself.
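A minimal sketch of such a setup, assuming a Maven project and a Jenkins declarative pipeline (the run count, goals and report paths are placeholders): build once, then run the same test suite repeatedly and publish every result, so genuinely flaky tests show up as intermittent failures in the test history.
```groovy
pipeline {
    agent any
    stages {
        stage('Build once') {
            steps {
                // Compile everything (including test classes) but skip test execution.
                sh 'mvn -DskipTests clean package'
            }
        }
        stage('Run tests repeatedly') {
            steps {
                script {
                    for (int run = 1; run <= 10; run++) {
                        // Keep going even when a run fails, so statistics accumulate.
                        sh 'mvn -Dmaven.test.failure.ignore=true surefire:test'
                        junit testResults: '**/target/surefire-reports/*.xml',
                              allowEmptyResults: true
                    }
                }
            }
        }
    }
}
```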

Related

Reports from Jenkins and Jira

We are using Jenkins to run our Selenium automation tests, and my manager wants to see the list of failed builds and what percentage of the tests passed for each build. We also have manual tests that get executed in JIRA. I need to combine both and derive test metrics from them.
The way I think of proceeding is as follows:
Get the Jenkins data in JIRA first using the Jenkins plugin for JIRA.
Use the JIRA API to collect the test results from Jenkins and from the manual tests run in JIRA.
Prepare a dashboard in JIRA to display all the metrics.
Could you tell me whether the above approach is correct and suggest anything additional?
Thanks in advance!
Are you using Cucumber? In that case you could use the Cucumber reporting plugin for Jenkins. If it doesn't suit your needs but you still use Cucumber, you can also generate reports in a format like JSON, which you could later parse to get your data.
I have the feeling that what you want to do is a bit complicated for not much benefit. If the tests are failing, you will likely have to look at what is happening anyway. Having the percentage is certainly nice, but you can spend hours or days tailoring something cute that your manager wants but that has no specific purpose. I would opt for something simpler:
If the automated tests fail, create a JIRA issue automatically from Jenkins. You could put the build number in a label or in the title. You could also create an issue on every run to record that build nr. ## was tested and everything went OK (a pipeline sketch follows after these suggestions).
As part of the manual testing process, report in JIRA what failed.
Create a dashboard and play a bit with tags and search to show which builds failed.
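A minimal sketch of the first suggestion, assuming a Maven-driven Selenium suite and a Jenkins declarative pipeline (the JIRA URL, project key 'QA' and credentials ID 'jira-creds' are placeholders): when the build fails, an issue is filed through JIRA's standard REST API with the build number in the summary.
```groovy
pipeline {
    agent any
    stages {
        stage('Selenium tests') {
            steps {
                sh 'mvn test'
            }
        }
    }
    post {
        always {
            // Publish the JUnit results whether the tests passed or not.
            junit allowEmptyResults: true, testResults: '**/target/surefire-reports/*.xml'
        }
        failure {
            // Build the issue payload with the current job and build number.
            writeFile file: 'issue.json', text: """{
              "fields": {
                "project":   { "key": "QA" },
                "issuetype": { "name": "Bug" },
                "summary":   "Automated tests failed in ${env.JOB_NAME} build ${env.BUILD_NUMBER}"
              }
            }"""
            // Send it to JIRA's issue-creation endpoint using stored credentials.
            withCredentials([usernameColonPassword(credentialsId: 'jira-creds', variable: 'JIRA_AUTH')]) {
                sh 'curl -s -u "$JIRA_AUTH" -H "Content-Type: application/json" -d @issue.json https://jira.example.com/rest/api/2/issue'
            }
        }
    }
}
```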
I would suggest AssertThat BDD & Test Management in Jira
It provides end-to-end integration, from feature creation to manual and automated test execution and reporting, with out-of-the-box integration with test automation frameworks through plugins.
The plugin lets you download feature files stored in Jira before the run, execute the tests in the usual way, and then upload the Cucumber test results back to Jira, which gives you a clear view of the testing progress in one place.
More info and usage examples on website https://www.assertthat.com/

Exclude specific paths during triggering TFS team build

I'm configuring continuous integration with TFS 2012. I have one problem need solve.
I need to exclude some paths from triggering builds.
For e.g. I have:
$/Project1
$/Project2
I want a build to be triggered after each check-in to $/Project1, and that build must build both $/Project1 and $/Project2.
But after a check-in to $/Project2, I don't want a build to be triggered for that build definition.
The Source Settings of the build definition only offer "Active" and "Cloaked", which isn't what I need.
Thanks a lot in advance.
P.S. The worst-case solution is to add the comment ***NO_CI*** to the check-in. It would be great if there were some other way.
Based on the comments, this boils down to an X-Y problem. You can't do what you want to do, but the reason you want to do it is because you're trying to solve the wrong problem.
You're running the UI tests at the wrong point in your dev-test cycle. UI tests should not run during a build, they should run after a release. A change to your test project should absolutely result in a build.
Someone is developing UI tests against code that's not yet in source control, which makes no sense. If someone is writing tests against code, the code should be source controlled.
I'm guessing that someone is manually pushing uncommitted code out to a dev server, which is being used by someone else to write tests. Don't do this. Use a real release management solution so that as developers write code, each check-in is automatically deployed to a dev/QA environment. Then the folks writing the UI tests will have something to test against. What's the point in writing tests against code that's in such a state of flux that the developer responsible for it isn't even sure it's worth being source controlled? That just results in spending a lot of time rewriting tests as the code evolves.
Assuming you set everything up properly, every commit of the application code will result in the current set of tests being run against the latest commit. Every commit of the test code will result in the new set of tests being run against the existing application code. The two things (application code and test code) should be coupled, and should always build together.
And one last thing, mostly opinion: UI tests are awful and serve very little utility. They are brittle, slow, and hard to maintain. I have never seen a comprehensive UI test suite actually provide value. UI tests are best served as a small set of post-release smoke tests. Business logic should be primarily unit tested, with a smaller suite of integration tests to back it up.

Executable requirements, PHPUnit and Jenkins strategy

We use Jenkins and PHPUnit in our development. For a long time I have wanted to start using executable requirements in our team. An architect, team leader, or whoever defines the requirements writes tests before the actual code; the actual code is then written by another team member. The executable-requirements tests are therefore committed to the repository before the actual code exists, Jenkins rebuilds the project and rightfully fails, and the project remains in a failed state until the new code is written - which defeats the XP rule of keeping the project in a good state at all times.
Is there any way to tell PHPUnit that such and such tests should not be run under Jenkins while still being easy for any dev to execute locally? Tweaking phpunit.xml is not really desirable: local changes to the tests themselves are better, as they are easier to keep track of.
We tried markTestIncomplete() and markTestSkipped(), but they don't really do what we need: the executable-requirements tests are genuinely complete and should not be skipped, and using these functions prevents running them easily during development.
The best approach in our case would be a PHPUnit option like --do-not-run-requirements, used only by the PHPUnit run executed by Jenkins. On a dev machine this option would not be used, and the executable-requirements tests would carry an @executableRequirements meta-comment at the top (removed only after the actual code is created and tested). The issue is that PHPUnit does not have such functionality.
Maybe there is a better way to implement executable requirements without "false" failures in Jenkins?
With PHPUnit, tests can be filtered for execution. Either annotate tests that should not be executed in a given environment with the @group annotation and then use --exclude-group <name-of-group> (or the <groups> element of PHPUnit's XML configuration file), or use the --filter <pattern> command-line option. Both approaches are covered in the PHPUnit documentation.
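For illustration, assuming the executable-requirements tests are annotated with @group executableRequirements (matching the meta-comment proposed in the question), the Jenkins job only needs to pass --exclude-group, while a developer running plain phpunit locally still executes every test. A minimal pipeline sketch (binary path and report location are assumptions):
```groovy
pipeline {
    agent any
    stages {
        stage('PHPUnit without requirement tests') {
            steps {
                // CI skips the executable-requirements group and writes a
                // JUnit-format report for Jenkins to pick up.
                sh 'vendor/bin/phpunit --exclude-group executableRequirements --log-junit build/junit.xml'
            }
        }
    }
    post {
        always {
            junit 'build/junit.xml'
        }
    }
}
```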
"For long time I wanted to start to use Test Driven Development in our team."
I don't see any problem with writing tests before actual code. This is not TDD.
To quote from Wikipedia:
"first the developer writes an (initially failing) automated test case that defines a desired improvement or new function, then produces the minimum amount of code to pass that test, ..."
Notice "test case", in the singular.
Having said that, you are quite welcome to define your own development methodology whereby one developer writes tests (in the plural), commits them to version control, and another developer writes code to satisfy the tests.
The solution to your dilemma is to commit the tests to a branch and have the other developer work in that branch. Once all the tests are passing, merge with trunk; Jenkins will then see the whole lot and give its opinion on whether the tests pass or not.
Just don't call it TDD.
I imagine it would not be very straightforward in practice to write tests without any basic framework in place. Hence, the 'minimum amount of code to pass the test' approach you suggested is not a bad idea.
Not necessarily a TDD approach
- Who writes the tests? If someone who works with requirements or a QA member writes the tests, you could probably simply write empty tests (so they don't fail). This approach makes sure that the developer covers all the cases the other person has thought about. Example test methods would be public void testThatObjectUnderTestReturnsXWhenACondition and public void testThatObjectUnderTestReturnsZWhenBCondition. (I like long descriptive names so there is no confusion about what I am thinking; alternatively, you can use comments to describe your tests.) The DEVs can write the code and finish the tests, or let someone else finish the tests later. Another way of stating this is to write executable requirements; see Cucumber/Steak/JBehave as executable-requirements tools.
Having said the above, we need to differentiate whether you are trying to write executable requirements or unit/integration/acceptance tests.
If you want to write executable requirements, anyone can write them, and they could be left empty to stop them from failing; the DEVs will then fill them in and make sure the requirements are covered. My opinion is to let the DEVs handle unit/integration/acceptance tests using TDD (actual TDD) and not to separate the responsibility of writing code from that of writing the appropriate unit/integration/acceptance tests for it.

How to quickly find out which tests have been fixed in jenkins

I have inherited a quite rotten test suite with over 1000 tests, where several hundred are failing. There are several people fixing the tests, and I would like to quickly see which tests have been fixed between test job runs.
Currently, I copy the lists of failed tests from two runs, sort and diff them. I hope there is a better way.
The only thing I could find that mentions a better possibility is this old thread.
Thank you.
Have you considered using Sonar (now SonarQube)? You can pipe test results over to Sonar and keep test history in a searchable format. We use PMD/Findbugs/etc and test results to red/yellow/green a job for further CI deployment, but use Sonar for trends and analysis.
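Short of adopting SonarQube, the sort-and-diff from the question can also be scripted against Jenkins' JSON API for published test reports. A rough sketch in Groovy (server URL, job name and build numbers are placeholders; it assumes JUnit results are recorded and that the instance allows anonymous read access, otherwise add authentication):
```groovy
import groovy.json.JsonSlurper

// Collect the fully qualified names of the tests that failed in one build.
def failedTests = { int build ->
    def url = "https://jenkins.example.com/job/my-test-job/${build}/testReport/api/json" +
              "?tree=suites[cases[className,name,status]]"
    def report = new JsonSlurper().parse(new URL(url))
    report.suites.collectMany { suite ->
        suite.cases.findAll { it.status in ['FAILED', 'REGRESSION'] }
                   .collect { "${it.className}.${it.name}".toString() }
    } as Set
}

def before = failedTests(100)   // an older run
def after  = failedTests(101)   // a newer run

// Whatever failed before but no longer fails has been fixed in between.
(before - after).sort().each { println "FIXED: ${it}" }
```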

Run JUnit tests in parallel

When running unit tests, Gradle can execute multiple tests in parallel without any changes to the tests themselves (i.e. no special annotations, test runners, etc.). I'd like to achieve the same thing with Ant, but I'm not sure how.
I've seen this question, but none of the answers really appeal to me. They either involve hacks with ant-contrib, special runners set up with the @RunWith annotation, some other special annotations, etc. I'm also aware of TestNG, but I can't make the Eclipse plug-in migrate our tests - and we have around 10,000 of them, so I'm not doing it by hand!
Gradle doesn't need any of this stuff, so how do I do it in Ant? I guess Gradle uses a special runner, but if so, it's set up as part of the JUnit setup and not mentioned on every single test. If that's the case, then that's fine. I just don't really want to go and modify c. 10,000 unit tests!
Gradle doesn't use a special JUnit runner in the strict sense of the word. It "simply" has a sophisticated test task that knows how to spin up multiple JVMs, run a subset of test classes in each of them (by invoking JUnit), and report back the results to the JVM that executes the build. There the results get aggregated to make it look like a single-JVM, single-threaded test execution. This even works for builds that define their own test listeners.
To get parallel test execution in Ant, you would need an Ant task that supports this feature (not sure if one exists). An alternative is to import your Ant build into Gradle (ant.importBuild "build.xml") and add a test task on the Gradle side.
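A minimal sketch of that Gradle-side wrapper, assuming the Ant build file is called build.xml and the tests follow (or are adapted to) Gradle's standard layout; the imported Ant targets are renamed so they don't clash with the tasks the java plugin adds:
```groovy
apply plugin: 'java'

// Expose the existing Ant targets as Gradle tasks, prefixed with 'ant-'
// to avoid name collisions with Gradle's own build/test/clean tasks.
ant.importBuild('build.xml') { antTargetName -> 'ant-' + antTargetName }

repositories { mavenCentral() }

dependencies {
    testImplementation 'junit:junit:4.13.2'
}

test {
    // Fork several JVMs; each one runs a subset of the JUnit test classes,
    // and Gradle aggregates the results as if it were a single run.
    maxParallelForks = Runtime.runtime.availableProcessors().intdiv(2) ?: 1
}
```
If the Ant build compiles the tests itself rather than following Gradle's conventions, the test task's testClassesDirs and classpath properties can be pointed at the Ant output directories instead.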
