I have a project with over 1,000 unit tests and was thinking of speeding up the build by using xctool's parallelize option.
So I turned that on and set logicTestBucketSize to 50. The tests run, but some of them fail that do not fail when this option is off.
My question: are buckets run independently in their own sandbox, or do they share global variables that the unit tests might set up? That might explain some cross-contamination between the tests.
Yes. When running tests in parallel, xctool will run each bucket of tests in a single process, and run multiple buckets simultaneously in different processes. Additionally, you can select whether bucketing will be done on a case or class basis with -bucketBy class. You should probably use class unless you have very large test classes with many test cases.
Your tests may fail now, though they didn't before, because:
A test case relies on global state set up by a previous test case, even one from a different test class, as long as both are grouped into the same binary. Such a test can now fail because the tests may run in a different order, or the test it depends on may not run in the same process at all.
A test alters global state and causes later tests to fail. This may not have been a problem before because that test ran after the tests it affects had already run.
A good way of dealing with the first type of failure is to run with a bucket size of 1 (in either bucket-by-class or bucket-by-case mode, depending on which mode you will run with later).
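For example, a class-bucketed run with a bucket size of 1 might look like the following; the workspace and scheme names are placeholders for your own:
xctool -workspace MyApp.xcworkspace -scheme MyAppTests run-tests -parallelize -bucketBy class -logicTestBucketSize 1
Once the order-dependent tests are fixed, you can raise the bucket size again for speed.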
I have a Google Test based test suite. The tests manipulate the filesystem and do other things that I don't want left behind if a test crashes, and they don't play nicely with running in parallel, so I want to run each test case in a new container. I am currently using CTest (a.k.a. CMake test) to run the gtest binary, but I am not very attached to either of these, so if the best option is some other tool, I can accept that.
Can anyone suggest a way to automate this? Right now I am adding each individual test case manually to CTest with a call to docker run as part of the test command, but it is brittle and time-consuming. Maybe I am doing this wrong?
You can run your GTest runner with --gtest_list_tests to list all tests.
You can then loop through this list and call your GTest runner with --gtest_filter set to the name of a specific test.
The format of the list is a bit awkward to parse, though, so you need some shell-scripting skills to get the actual test names.
Check the exit code of the GTest runner to know whether the test succeeded or failed.
I do not know if this integrates well with CTest.
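A rough sketch of that approach is below; the image name and binary path are placeholders, and the awk parsing assumes the usual --gtest_list_tests output of unindented "Suite." lines followed by indented test names:
#!/usr/bin/env bash
image=my-test-image            # placeholder: Docker image containing the test binary
runner=/usr/local/bin/my_tests # placeholder: path to the GTest binary inside the image

# Turn "Suite." / "  TestName" pairs into fully qualified "Suite.TestName" names.
tests=$(docker run --rm "$image" "$runner" --gtest_list_tests \
  | awk '/^[^ ]/ { suite=$1 } /^  / { print suite $1 }')

status=0
for t in $tests; do
  echo "=== $t ==="
  # One fresh container per test case; a non-zero exit code marks the run as failed.
  docker run --rm "$image" "$runner" --gtest_filter="$t" || status=1
done
exit $status
If you want to keep CTest in the picture, the same list could be used to generate one add_test() entry per test name instead of running docker directly in the loop.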
As stated, I'm testing a React Native app with Detox (and Jest), and I'd like to have several e2e files with different purposes (e.g. login, filling a form, and so on) and run them in a specific order (the login e2e file should go first). Running them in random order wouldn't do the job.
The goal is to avoid having one huge file.
Note: I'm running the tests on iOS simulator.
Short answer: you can't, but keep reading.
Jest's conceptual model is that each test file is a unit and is totally isolated from the others. That makes things easier to reason about and allows parallelisation. If your tests need to run in a specific order, then they are logically a single unit and so need to be specified in a single test file.
However, that doesn't preclude you from splitting your tests into several files. You can have one file which Jest recognises (e.g. full-suite.e2e.js) and have that file include several others (e.g. login.js, forms.js, etc.). That way Jest runs everything as one file, in your specified order, yet you can organise your individual tests in a way that makes logical sense to you.
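One possible layout, assuming only *.e2e.js files match your Jest/Detox test pattern so that the helper files are only ever pulled in by the aggregate file (all names here are examples):
e2e/full-suite.e2e.js   # does require('./login') and then require('./forms'), which fixes the order
e2e/login.js            # the login describe() blocks
e2e/forms.js            # the form-filling describe() blocks

# Run only the aggregate file on the simulator; the configuration name is a placeholder:
detox test e2e/full-suite.e2e.js --configuration ios.sim.debug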
In Bazel, you can re-run a test multiple times with:
bazel test --runs_per_test=<n> <target>
This is useful to reproduce conditions that lead a flaky test to fail.
The shortcoming of this approach, though, is that the tests will be run n times regardless of the results.
For tests that are flaky only under very rare conditions, this means you have to set n high, and you may then have to scroll through a lot of output before you find the failing test's output.
Is there a built-in way in Bazel to run a test until it fails? Right now I'm using a while loop in Bash, which is good enough for my use case but not portable:
while bazel test --test_output=errors -t- <target_name>; do :; done
Passing --notest_keep_going will cause Bazel to exit as soon as a test fails. So, you could use --notest_keep_going in combination with an absurdly high --runs_per_test.
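Combining the two, something like this should stop at the first failing run instead of completing all of them (the run count is arbitrary):
bazel test --runs_per_test=10000 --notest_keep_going --test_output=errors <target>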
We use Jenkins and PHPUnit in our development. For a long time I have wanted to start using executable requirements in our team. An architect, team leader, or whoever defines the requirements can write tests before the actual code; the actual code is then written by another team member. That means the executable-requirements tests are committed to the repository before the actual code exists, Jenkins rebuilds the project and rightfully fails, and the project remains in a failed state until the new code is written, which defeats the XP rule of keeping the project in a good state at all times.
Is there any way to tell PHPUnit that such and such tests should not be run under Jenkins, while still being easy for any dev to execute locally? Tweaking phpunit.xml is not really desirable: local changes to the tests themselves are better, as they are easier to keep track of.
We tried markTestIncomplete() and markTestSkipped(), but they don't really do what we need: the executable-requirements tests are actually complete and should not be skipped, and using these functions prevents easy execution of such tests during development.
The best approach in our case would be a PHPUnit option like --do-not-run-requirements, used only by the PHPUnit run executed by Jenkins. On a dev machine this option would not be used, and the executable-requirements tests would carry an #executableRequirements meta-comment at the top (removed only after the actual code is created and tested). The issue is that PHPUnit does not have such functionality.
Maybe there is a better way to implement executable requirements without "false" failures in Jenkins?
With PHPUnit, tests can be filtered for execution. Either annotate the tests that should not be executed in a given environment with the @group annotation and then use --exclude-group <name-of-group> (or the <group> element of PHPUnit's XML configuration file), or use the --filter <pattern> command-line option. Both approaches are covered in the documentation.
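For example, if the requirements tests carry a @group executableRequirements annotation in their docblocks (the group name here is just an example), the Jenkins job and the developers can invoke PHPUnit differently:
# On Jenkins: run everything except the executable-requirements tests
phpunit --exclude-group executableRequirements

# On a dev machine: run only the executable-requirements tests
phpunit --group executableRequirements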
For a long time I wanted to start to use Test Driven Development in our team. I don't see any problem with writing tests before actual code.
This is not TDD.
To quote from Wikipedia:
first the developer writes an (initially failing) automated test case that defines a desired improvement or new function, then produces the minimum amount of code to pass that test, ...
Notice that "test case" is in the singular.
Having said that, you are quite welcome to define your own development methodology whereby one developer writes tests (plural), commits them to version control, and another developer writes code to satisfy the tests.
The solution to your dilemma is to commit the tests to a branch and have the other developer work in that branch. Once all the tests are passing, merge with trunk; Jenkins will then see the whole lot and give its verdict on whether the tests pass.
Just don't call it TDD.
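A minimal sketch of that branch workflow, with hypothetical branch and file names:
git checkout -b executable-requirements-login
git add tests/LoginRequirementsTest.php
git commit -m "Executable requirements for the login feature"
# ...the implementing developer works on this branch until the tests pass, then:
git checkout master                       # or whatever your trunk branch is called
git merge executable-requirements-login   # Jenkins now sees tests and code together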
I imagine it would not be very straightforward in practice to write tests without any basic framework in place. Hence, the 'minimum amount of code to pass the test' approach you suggested is not a bad idea.
Not necessarily a TDD approach
- Who writes the tests? If someone who works with requirements or a QA member writes the tests, you could probably simply write empty tests (so they don't fail). This approach makes sure that the developer covers all the cases the other person has thought about. Example test methods would be public void testThatObjectUnderTestReturnsXWhenACondition and public void testThatObjectUnderTestReturnsZWhenBCondition. (I like long, descriptive names so there is no confusion about what I am thinking; alternatively, you can use comments to describe your tests.) The devs can write the code and finish the tests, or let someone else finish the tests later. Another way of stating this is to write executable requirements; see Cucumber, Steak, or JBehave as executable-requirements tools.
Having said the above, we need to differentiate between writing executable requirements and writing unit/integration/acceptance tests.
If you want to write executable requirements, anyone can write them, and they can be left empty to stop them from failing. Devs will then fill them in and make sure the requirements are covered. My opinion is to let the devs handle unit/integration/acceptance tests using TDD (actual TDD) and not to separate the responsibility of writing code from writing the appropriate unit/integration/acceptance tests for that code.
I am new to automated testing.
I am trying to do automated integration testing of my app with the KIF framework to facilitate testing before releases. I have several test cases. When I run the tests (Cmd + U), the test cases run, but in a strange sequence (not in alphabetically sorted order). I also cannot run a single test case; when I try to do so, a random test case runs before the one I want to run.
P.S. Some of my test cases inherit from more general test cases.
Can you give me any hints as to what it could be?
Thanks!
AFAIK, test cases have no defined order and they should be independent of one another. If you have unit tests that depend on execution order, you're doing testing incorrectly and need to refactor your tests to be independent.