When running unit tests, Gradle can execute multiple tests in parallel without any changes to the tests themselves (i.e. no special annotations, test runners, etc.). I'd like to achieve the same thing with Ant, but I'm not sure how.
I've seen this question, but none of the answers really appeal to me. They either involve hacks with ant-contrib, special runners set up with the @RunWith annotation, some other special annotations, etc. I'm also aware of TestNG, but I can't make the Eclipse plug-in migrate our tests - and we have around 10,000 of them, so I'm not doing it by hand!
Gradle doesn't need any of this stuff, so how do I do it in Ant? I guess Gradle uses a special runner, but if so, it's set up as part of the JUnit setup and not mentioned on every single test. If that's the case, then that's fine. I just don't want to go and modify c. 10,000 unit tests!
Gradle doesn't use a special JUnit runner in the strict sense of the word. It "simply" has a sophisticated test task that knows how to spin up multiple JVMs, run a subset of test classes in each of them (by invoking JUnit), and report back the results to the JVM that executes the build. There the results get aggregated to make it look like a single-JVM, single-threaded test execution. This even works for builds that define their own test listeners.
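For concreteness, this behaviour is controlled by the test task's maxParallelForks property. A minimal build.gradle sketch (the fork count here is just an example value):

    // build.gradle - minimal sketch; tune the fork count to your machine
    apply plugin: 'java'

    test {
        // Run test classes across up to 4 forked JVMs; Gradle partitions
        // the classes between forks and aggregates the results.
        maxParallelForks = 4
    }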
To get parallel test execution in Ant, you would need an Ant task that supports this feature (not sure if one exists). An alternative is to import your Ant build into Gradle (ant.importBuild "build.xml") and add a test task on the Gradle side.
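The import route could look roughly like this (a sketch only; it assumes your Ant target names don't clash with the Java plugin's task names, and the source directory is a made-up example):

    // build.gradle - sketch of importing an Ant build and adding parallel tests
    ant.importBuild 'build.xml'   // every Ant target becomes a Gradle task

    apply plugin: 'java'

    // Point Gradle at wherever your tests actually live (assumed path)
    sourceSets.test.java.srcDirs = ['src/test/java']

    test {
        maxParallelForks = 4
    }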
Related
How do I label flaky tests in the JUnit XML syntax that Jenkins uses to give us a report? Jenkins gives me a nice report of tests that succeeded and failed. I would like to know which tests are known to be flaky.
Certainly a flaky test is something that needs to be avoided and fixed; tests should always pass, and if they don't, they need to be worked on.
However, we don't live in a perfect world, so it might be helpful to identify tests that have failed every now and then in the past.
Since Jenkins keeps track of test results you can step through the history of a particular test case (using the "Previous/Next build" links).
Additionally there are two Jenkins plugins that might be helpful:
Test Results Analyzer Plugin: this plugin lets you see the history of a particular test case (or test suite) at a glance (plus it adds nice charts)
Flaky Test Handler Plugin: this plugin has deeper support for failing tests (i.e. rerunning them). It's a bit restricted to Maven and Git, though.
Labeling such tests is not helpful. Your primary goal should be to eliminate the "flakiness" of such tests - by identifying the root cause of the problem and fixing the production code or the test code (or both); or, worst case, by deleting or @Ignore-ing those tests.
Disclaimer, of course: Jenkins can't tell you about flaky test cases. How could it?!
If you assume that you have flaky test cases, you could spend some time creating a Jenkins setup where all tests are run 5, 10, or 50 times against the same build output, and then compare statistics. But this is nothing that comes for free - you will have to implement it yourself. A sketch of the idea follows.
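As one possible shape for it (everything here - the stage name, Maven flags, and report paths - is an assumption, not a recipe):

    // Hypothetical Jenkinsfile fragment: rerun the suite N times against one checkout
    stage('Hunt flaky tests') {
        steps {
            script {
                for (int run = 1; run <= 10; run++) {
                    // Tag each run's reports so they aren't overwritten,
                    // and keep going even when a run fails
                    sh "mvn test -Dsurefire.reportNameSuffix=run${run} || true"
                }
            }
            // Aggregate all runs; a test that fails only sometimes is a flake candidate
            junit 'target/surefire-reports/*.xml'
        }
    }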
I have a scenario where there are several independent jmx files, each with its own ThreadGroup, etc. Using the JMeter Ant script I can fire them all in sequence and collect the results in a jtl file. My challenge is to do the same thing but fire off the tests in parallel. Please note that the Include Controller is not an option, since I want to use (or honor) the ThreadGroup and User Defined Variables in each jmx file.
Thanks for your help
Perhaps the Parallel Ant task is what you're looking for.
However, <parallel> is not thread safe in combination with the JMeter Ant task, so I wouldn't recommend using them together. Consider instead e.g. command-line mode, the Maven plugin, or a custom Java class which spawns the individual JMeter tests.
See the 5 Ways To Launch a JMeter Test without Using the JMeter GUI guide for details of these approaches; hopefully one of them matches your environment.
Yes, Ant's <parallel> task solves this problem.
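As an illustration, here is a Groovy AntBuilder sketch of the same <parallel> idea; it sidesteps the JMeter Ant task entirely by forking plain non-GUI JMeter processes (the jar path and plan names are made up):

    // Groovy sketch driving Ant's <parallel> task via the built-in AntBuilder
    def ant = new AntBuilder()
    ant.parallel {
        // Each forked JVM runs one test plan in JMeter's non-GUI mode
        java(jar: '/opt/jmeter/bin/ApacheJMeter.jar', fork: true) {
            arg(line: '-n -t plan1.jmx -l result1.jtl')
        }
        java(jar: '/opt/jmeter/bin/ApacheJMeter.jar', fork: true) {
            arg(line: '-n -t plan2.jmx -l result2.jtl')
        }
    }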
Long story short,
I was wondering if anyone ever felt the need for (and knows of any implementation of) the possibility of "instantiating" (OO terminology) a parametrized build.
What I mean is treating a parametrized build as a template, from which many "instances" can be generated.
Each instance is supposed to define a different combination of values for the parameters.
The final goal is threefold:
DRY (which is given simply by the parametrized build concept)
having separate build histories / test reports for each instance (otherwise it would be a mess)
the instances would be schedulable directly in the Jenkins UI (while a parametrized build is not)
The template would then be used only for:
manual builds
changing the config for all of the instances at once
Now, time for some context, as I may be missing something in my overall approach.
You are welcome to point me in the right direction :)
I have a maven project with a suite of selenium tests that I want jenkins to run.
The suite is parametrized: browser, OS, test environment.
So, I can run it e.g. with mvn test -Dbrowser=chrome -Dplatform=win [..].
I want a separate test report for each combination of my parameters.
As a newbie, my first solution was "Copy existing job".
Quick and dirty. But effective.
As you will know, problems arise when you need to make a change to the configuration of the job and want to keep all of these copy-and-pasted jobs in sync.
Then I found the parametrized build feature.
It's very cool (code reuse/maintainability++), but the test report and the build history are shared among all of the actual builds, so I cannot rely on them for tidy reporting like "this test always fails on IE but not on Chrome", and so on.
Thank you very much in advance
I think what you are describing is the matrix project.
There are also Selenium plugins; I put one together to work with matrix jobs: https://wiki.jenkins-ci.org/display/JENKINS/Selenium+Axis+Plugin
One shortcoming I can see: you can't build a single combination, as the build button is present only at the "top level".
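To make the shape of such a job concrete, here is a hypothetical Job DSL sketch (the job name, axis values, and Maven goals are all assumptions):

    // Hypothetical Job DSL sketch of a matrix project for the Selenium suite
    matrixJob('selenium-suite') {
        axes {
            text('browser', 'chrome', 'firefox', 'ie')
            text('platform', 'win', 'linux')
        }
        steps {
            // Each combination gets its own workspace, build history, and test report
            maven('test -Dbrowser=$browser -Dplatform=$platform')
        }
    }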
Have you tried the Matrix Combinations plugin?
https://wiki.jenkins-ci.org/display/JENKINS/Matrix+Combinations+Plugin
I have a problem of comprehension.
Unit tests are coded by developers in order to test a class (Java).
Integration tests are aimed at verifying that the different classes work together.
My problem is:
Based on continuous integration: I have Subversion (SVN) linked to Jenkins, and Sonar linked to Jenkins.
How are the integration tests created? Who writes them? Are these tests already available in Sonar, or do developers have to code them? Does Sonar launch the integration tests through Jenkins? How does it work?
Integration tests are also coded by developers, to test multiple classes at one time, conceptually a "module", whatever that means in your world.
In my world, unit tests are tests that exercise one class, and have no dependencies externally. We allow file system access for mock data and logging, but that's all.
If a test exercises an actual database, or a running executable somewhere (e.g. web service) it is an integration test. We write them with junit, same as a unit test.
We find it works best for us to have separate Jenkins jobs linked in a pipeline to build, execute unit tests, execute integration tests, and load Sonar. While SonarQube is able to run tests for you, we prefer the separation which allows us to manually execute either set of tests via Jenkins without updating Sonar at the same time.
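A declarative pipeline along these lines would give that separation (a sketch only - the stage names and Maven goals are assumptions, not our exact setup):

    // Hypothetical Jenkinsfile sketch of the build / unit / integration / Sonar split
    pipeline {
        agent any
        stages {
            stage('Build')             { steps { sh 'mvn -DskipTests package' } }
            stage('Unit tests')        { steps { sh 'mvn surefire:test' } }
            stage('Integration tests') { steps { sh 'mvn failsafe:integration-test failsafe:verify' } }
            stage('Sonar')             { steps { sh 'mvn sonar:sonar' } }
        }
        post {
            // Publish test results whether or not a stage failed
            always { junit '**/target/*-reports/*.xml' }
        }
    }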
I am setting up my local build using Ant and have decided to use RabbitMQ. I would like an Ant task that I can use to configure my local installation, setting things up (stop, start, create queues, etc.) and tearing them down as part of my test suite.
Has anyone come across anything like this?
I described a scenario in this question where the OP was looking for a way to declare queues and bindings without the overhead of doing it at runtime.
In my solution I use a console utility to perform the queue declarations and have this called from a build step in my build server when running builds and tests.
During the normal course of coding and integration testing from the IDE, I simply make sure that I have run the utility fairly recently, so that the queues have been established as per the current XML definitions. My test setups ensure that the queues themselves are empty before running. A sketch of such a utility follows.
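A hypothetical Groovy take on it, assuming an XML file with <queue name="..."/> entries, a broker on localhost, and the RabbitMQ Java client on the classpath:

    // Sketch of a console utility that declares the queues listed in an XML file
    import com.rabbitmq.client.ConnectionFactory

    def defs = new XmlSlurper().parse(new File(args ? args[0] : 'queues.xml'))
    def conn = new ConnectionFactory(host: 'localhost').newConnection()
    def channel = conn.createChannel()
    defs.queue.each { q ->
        // durable=true, exclusive=false, autoDelete=false, no extra arguments
        channel.queueDeclare(q.@name.toString(), true, false, false, null)
        println "declared queue ${q.@name}"
    }
    channel.close()
    conn.close()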
Hope this helps.
Steve
Ant is a build tool. While running your automated tests is generally part of a build process, the setup of your queues is part of your specification's context and should be included in your tests. If you truly need to configure your exchanges and queues once before all test runs, many test frameworks provide a facility to do this (see the sketch below).
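With JUnit 4, for example, a @BeforeClass method can stand up the queues once per test class. A minimal Groovy sketch, assuming the RabbitMQ Java client and a broker on localhost (the class and queue names are made up):

    import com.rabbitmq.client.Channel
    import com.rabbitmq.client.Connection
    import com.rabbitmq.client.ConnectionFactory
    import org.junit.AfterClass
    import org.junit.BeforeClass
    import org.junit.Test

    class QueueFixtureTest {
        static Connection conn
        static Channel channel

        @BeforeClass
        static void setUpQueues() {
            // Declare the queue once before any test in this class runs
            conn = new ConnectionFactory(host: 'localhost').newConnection()
            channel = conn.createChannel()
            channel.queueDeclare('test.queue', true, false, false, null)
            channel.queuePurge('test.queue')   // start from an empty queue
        }

        @AfterClass
        static void tearDownQueues() {
            channel.queueDelete('test.queue')  // tear the queue back down
            conn.close()
        }

        @Test
        void queueStartsEmpty() {
            assert channel.queueDeclarePassive('test.queue').messageCount == 0
        }
    }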