In the following example:
<target name="tests.unit">
    <junit>
        <batchtest>
            <fileset dir="tsrc">
                <include name="**/Test*.java"/>
                <exclude name="**/tests/*.java"/>
            </fileset>
        </batchtest>
    </junit>
</target>

<target name="tests.integration">
    <junit>
        <batchtest>
            <fileset dir="tsrc">
                <include name="**/tests/Test*.java"/>
            </fileset>
        </batchtest>
    </junit>
</target>
If I have several test suites in the **/tests/ folder, how will it know which test suite to run first when I run the tests.integration target? If I have TestSuite1.java, TestSuite2.java and TestSuite3.java, I would like the test suites to run in the order specified by the filename.
You could create a base test class, put the login in its setUp() and inherit all your test cases from this one (and of course, call super.setUp() everywhere).
Note that this would only be a simple login, not a proper unit test. You should unit test your login functionality with all possible crazy user input and whatnot in a separate test class, but for the rest of the test cases you only need a plain simple login with a default username or something.
For those test cases where on top of login you need a product as well, you create a second base test class which extends the first and adds product creation to its setUp().
No code duplication: if the login changes, then apart from the login test cases themselves, you only need to change a single method in your test code.
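A minimal sketch of that layering, in JUnit 3 style to match setUp()/tearDown(); the class names, the login helper and the Selenium calls in the comments are illustrative, not taken from the question (each class would live in its own file):

import junit.framework.TestCase;

// Base class: every functional test that needs a logged-in session extends this.
public abstract class LoggedInTestCase extends TestCase {
    protected void setUp() throws Exception {
        super.setUp();
        login("defaultUser", "defaultPassword"); // one plain, known-good login
    }

    // Hypothetical helper wrapping the Selenium login steps.
    protected void login(String user, String password) {
        // e.g. selenium.open("/login"); selenium.type("username", user); ...
    }
}

// Second level: adds a default product on top of the login.
public abstract class ProductTestCase extends LoggedInTestCase {
    protected void setUp() throws Exception {
        super.setUp();          // performs the login
        createDefaultProduct(); // hypothetical helper shared by the product tests
    }

    protected void createDefaultProduct() { /* create a product through the UI or an API */ }
}

// A concrete test case now contains only the behaviour it actually verifies.
public class TestReadProduct extends ProductTestCase {
    public void testProductDetailsAreShown() {
        // assertions against the product created in setUp()
    }
}

If the login flow changes, only LoggedInTestCase changes; the concrete test cases stay untouched.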
It will probably be slower to execute 5000 unit tests this way, but much, much safer. If you start to depend on the order of execution of your unit tests, you're stepping onto a slippery slope. It is very difficult to notice that you have inadvertently introduced an extra dependency between two unit tests if their order is fixed by your configuration. For example, you set up a specific property of a product or change a global configuration setting in one unit test, then you test something else on your product in the next test case, and it happens to work only because the previous unit test set things up in that specific way. This leads to nasty surprises sooner or later.
Unless there are new features in JUnit, this is a difficult thing to do.
TestNG can manage it with dependent groups.
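For illustration, dependent groups in TestNG look roughly like this (class, method and group names are made up):

import org.testng.annotations.Test;

public class ProductFlowTest {

    @Test(groups = "login")
    public void login() { /* log on to the site */ }

    @Test(groups = "createProduct", dependsOnGroups = "login")
    public void createProduct() { /* skipped unless the login group passed */ }

    @Test(dependsOnGroups = "createProduct")
    public void readProduct() { /* skipped unless the product was created */ }
}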
Here's a question for you: Why does the order matter? Unit tests should not have dependencies. If you're doing that, perhaps these are really integration tests.
FitNesse might be a better way to go.
Yes, I am trying to create a test suite for functional tests, not unit tests. I'm trying to use JUnit to build the functional test package. I am using Selenium, which is based on JUnit.
Let's say I have a website where you can't do anything without logging on. In this case I have a test case that tests the logging-on functionality, and then I would have another test case that tests something else. The order they are executed in matters, because I can't test anything before logging on, which means the order should be
TestLogin
TestCreateProduct
TestReadProduct
In the above test cases, I can't read any product before it is created and before I have logged on, and I can't create a product before I have logged on. I have seen a lot of comments about using the setUp() and tearDown() methods, but surely that would mean a lot of duplication.
If, for example, I have to make the TestReadProduct test case independent, I would have to put the TestLogin and TestCreateProduct functionality in the setUp() method of the TestReadProduct test case. Surely this is a maintenance nightmare: imagine having to maintain 5000 test cases. I would have to make a lot of changes in a lot of places if the TestLogin functionality changes.
I am thinking of using the "depends" option in Ant, something like this:
<target name="TestReadProduct" depends="TestLogin,TestCreateProduct">
Isn't there a better way of doing this?
I have a requirement where I have to run a while loop in an Ant script.
I have to check the status of a file (it is being created by some other process) in the loop and perform some task based on it.
I strongly urge you not to use third-party tasks that provide looping capability if at all possible. Introducing programming logic such as loops and if statements can easily turn your build scripts into unusable spaghetti code.
For your specific case, native Ant already has a much simpler solution. You can use the waitfor task with a nested available condition pointed to the file in question:
<waitfor>
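  <!-- waits until the nested condition becomes true; waitfor's maxwait/checkevery attributes can bound and pace the polling -->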
<available file="/path/to/your/file" />
</waitfor>
https://ant.apache.org/manual/Tasks/waitfor.html
https://ant.apache.org/manual/Tasks/available.html
We use Jenkins and PHPUnit in our development. For a long time I have wanted to start using executable requirements in our team. An architect/team leader/someone who defines requirements can write tests before the actual code; the actual code is then written by another team member. Because the executable-requirements tests are committed to the repository before the actual code exists, Jenkins rebuilds the project and rightfully fails, and the project remains in a failed state until the new code is written, which defeats the XP rule of keeping the project in a good state at all times.
Is there any way to tell PHPUnit that such-and-such tests should not be run under Jenkins, while they can still be executed locally by any dev with ease? Tweaking phpunit.xml is not really desirable: local changes to the tests are better, as they are easier to keep track of.
We tried markTestIncomplete() and markTestSkipped(), but they do not really do what we need: the executable-requirements tests are really complete and should not be skipped, and using these functions prevents easy execution of such tests during development.
The best approach in our case would be a PHPUnit option like --do-not-run-requirements, used only by the PHPUnit run executed by Jenkins. On a dev machine this option would not be used, and the actual executable-requirements tests would have an #executableRequirements meta-comment at the beginning (removed only after the actual code is created and tested). The issue is that PHPUnit does not have such functionality.
Maybe there is a better way to achieve executable requirements without "false" failures in Jenkins?
With PHPUnit, tests can be filtered for execution. Either annotate the tests that should not be executed in one environment with the @group annotation and then use --exclude-group <name-of-group> (or the <group> element of PHPUnit's XML configuration file), or use the --filter <pattern> command-line option. Both approaches are covered in the documentation.
For a long time I wanted to start to use Test Driven Development in our team. I don't see any problem with writing tests before actual code.
This is not TDD.
To quote from Wikipedia:
first the developer writes an (initially failing) automated test case
that defines a desired improvement or new function, then produces the
minimum amount of code to pass that test, ...
Notice the test case in the singular.
Having said that, you are quite welcome to define your own development methodology whereby one developer writes tests in the plural, commits them to version control, and another developer writes code to satisfy the tests.
The solution to your dilemma is to commit the tests to a branch and have the other developer work in that branch. Once all the tests are passing, merge with trunk and Jenkins will see the whole lot and give its opinion on whether the tests pass or not.
Just don't call it TDD.
I imagine it would not be very straightforward in practice to write tests without any basic framework. Hence, the 'minimum amount of code to pass the test' approach you suggested is not a bad idea.
Not necessarily a TDD approach
- Who writes the tests? If someone who works with requirements or a QA member writes the tests, you could probably simply write empty tests (so they don't fail); see the sketch below. This approach will make sure that the developer covers all the cases the other person has thought about. Example test methods would be public void testThatObjectUnderTestReturnsXWhenACondition and public void testThatObjectUnderTestReturnsZWhenBCondition. (I like long descriptive names so there is no confusion as to what I am thinking; alternatively, you can use comments to describe your tests.) The devs can write code and finish the tests, or let someone else finish the tests later. Another way of stating this is to write executable requirements; see Cucumber/Steak/JBehave as executable-requirements tools.
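A sketch of such placeholder tests, written in JUnit for illustration (the same idea carries over to PHPUnit); the class and scenario names are made up:

import org.junit.Test;

// Placeholder tests written by whoever owns the requirements.
// The empty bodies pass, so the build stays green until a developer fills them in.
public class ShoppingCartRequirementsTest {

    @Test
    public void testThatCartTotalIsZeroWhenCartIsEmpty() {
        // TODO: implement together with the production code
    }

    @Test
    public void testThatCartRejectsNegativeQuantities() {
        // TODO: implement together with the production code
    }
}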
Having said the above, we need to differentiate whether you are trying to write executable requirements or unit/integration/acceptance tests.
If you want to write executable requirements, anyone can write them, and they can be left empty to stop them from failing; the devs will then fill them in and make sure the requirements are covered. My opinion is to let the devs deal with unit/integration/acceptance tests using TDD (actual TDD) and not separate the responsibility of writing code from writing the appropriate unit/integration/acceptance tests for it.
I think I'm missing a link somewhere in how Microsoft expects TFS and automated testing to work together. TFS allows us to create test cases that have test steps. These can be merged into a variety of test plans. I have this all set up and working as I would expect for manual testing.
I've now moved on to automating some of these tests. I have created a new Visual Studio project which relates to my test plan. I have created a test class that relates to the test case and planned to create a test method for each test step within the test class, using an ordered test to ensure that the methods are executed in the same order as the test steps.
I'd hoped that I could then link this automation up to the test case so that it could be executed as part of the test plan.
This is when it all goes wrong. It is my understanding that the association panel appears to only hook a test case up to a particular test method, not a test step?
Is my understanding correct?
Have MS missed a trick here and made things a little too complicated, or have I missed something? If I hook a whole test case up to a single method, I lose granularity of what each step is doing.
If each test step were hooked up to a test method, it would be possible for the assert of the test method to register a pass or fail of the overall test case.
Any help or direction so that I can improve my understanding would be appreciated.
The link is not obvious. In Visual Studio Team Explorer, create and run a query to find the test case(s). Open the relevant test case and view the test automation section. On the right-hand side of the test automation line there should be an ellipsis; click it and link the automated test to the test case.
I view this as pushing an automated test from Visual Studio. Confusingly, you cannot pull an automated test into MTM.
You can link only one method to a test case. That one method should cover all the steps written in its associated test case, including the verifications (assertions).
If it becomes impossible to cover all the steps in one test method, or if you have too many verifications in your test case, then the test case needs to be broken down into smaller test cases, each of which gets one automated method associated with it.
An automated test should work like this (not a hard rule, though):
Start -> Do some action -> Verify (Assert) -> Finish
You can write as many assertions as you like, but if the first assert fails, the test won't proceed to the remaining assertions. This is how manual testing works too, i.e. the test fails even if 1 step out of 100 fails.
For the sake of maintainability it is advisable to keep the number of asserts in an automated test to a minimum, and the easiest way to achieve this is by splitting the test case. Microsoft and other test automation providers work this way, and we don't write test methods for each and every step; that would make things very complicated.
But yes, you can write reusable methods (not test methods) in your test framework for the individual steps and call them from your test methods. For example, you don't have to write the code for a test case step such as "Application Login" again and again; you write that method once and call it in whichever test methods (linked to test cases) need it.
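A rough sketch of that idea, written in Java/JUnit for brevity (in a Visual Studio test project the same shape applies to MSTest [TestMethod]s); all names are made up, and each class would live in its own file:

// Reusable step helpers: ordinary methods, not test methods, shared by many tests.
public class TestSteps {

    public static void applicationLogin(String user, String password) {
        // drive the login screen here (UI automation calls)
    }

    public static void createProduct(String name) {
        // drive the product-creation screen here
    }
}

public class CreateProductTest {

    @org.junit.Test
    public void createProductAsLoggedInUser() {
        TestSteps.applicationLogin("defaultUser", "secret"); // step written once, reused everywhere
        TestSteps.createProduct("Widget");
        // single area of verification: assert that the product now exists
    }
}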
When running unit tests, Gradle can execute multiple tests in parallel without any changes to the tests themselves (i.e. special annotations, test runners, etc.). I'd like to achieve the same thing with Ant, but I'm not sure how.
I've seen this question, but none of the answers really appeal to me. They either involve hacks with ant-contrib, special runners set up with the @RunWith annotation, some other special annotations, etc. I'm also aware of TestNG, but I can't make the Eclipse plug-in migrate our tests, and we have around 10,000 of them, so I'm not doing it by hand!
Gradle doesn't need any of this stuff, so how do I do it in Ant? I guess Gradle uses a special runner, but if so, it's set up as part of the JUnit setup and not mentioned on every single test. If that's the case, then that's fine. I just don't really want to go and modify c. 10,000 unit tests!
Gradle doesn't use a special JUnit runner in the strict sense of the word. It "simply" has a sophisticated test task that knows how to spin up multiple JVMs, run a subset of test classes in each of them (by invoking JUnit), and report back the results to the JVM that executes the build. There the results get aggregated to make it look like a single-JVM, single-threaded test execution. This even works for builds that define their own test listeners.
To get parallel test execution in Ant, you would need an Ant task that supports this feature (not sure if one exists). An alternative is to import your Ant build into Gradle (ant.importBuild "build.xml") and add a test task on the Gradle side.
I would like to run a JUnit regression test suite from within an Ant build script. The test suite has a known number of failures, which we are working towards fixing. I want the build to fail if the number of failing tests increases (or changes at all, if that's easier to code); this value can be hard-coded in the Ant script.
I'm looking for details on how to implement this.
Even though I'm a bit late to the party:
Another option would be to use JUnit 4 and annotate the failing tests with @Ignore.
That way they will not run and will not count as failures, but the number of ignored tests will still be printed, reminding you that there is work left to do.
If a test fails for the first time, you'll get a regular failure; then you can decide whether to fix or to ignore it.
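For illustration, a known failure parked this way might look like the following (the names and the reason string are made up):

import static org.junit.Assert.fail;

import org.junit.Ignore;
import org.junit.Test;

public class KnownFailuresTest {

    // Known failure: skipped, so it does not break the build,
    // but it still shows up in the report's "ignored" count.
    @Ignore("Known regression, tracked in the issue tracker")
    @Test
    public void importHandlesEmptyFile() {
        fail("not fixed yet"); // would fail again if @Ignore were removed
    }

    // Regular tests keep running and keep failing the build when they break.
    @Test
    public void importHandlesSimpleFile() {
        // assertions here
    }
}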
More information:
http://www.devx.com/Java/Article/31983/1954
The junit task creates an XML file with the failing tests. There's no reason you couldn't use XMLTask to count the number of failures and have a conditional fail if that value is too high.