XCUITest Order of Test Execution (iOS)

Is there any known way to influence the order in which test methods are called by XCUITest? Currently it seems to be completely arbitrary which test method is called in which order.
This makes it impossible to define a specific order. In general, the way Xcode manages XCUITest runs is sub-optimal in that you have to define a test function for every single test case. It would be much more desirable to launch a complete test session from a single method, so that one can structure one's own tests as, for example, session : features : scenarios.
... That's exactly what I'm trying to do, because we're following a Calabash-style test structure and a framework I've written around XCUITest provides a lot of additional functionality (such as Testrail integration).
I've tried to implement the framework in such a way that all features and scenarios are organized and started from a single test() method. This would work if Xcode allowed for it, but as it turns out, terminating and relaunching the app between every scenario is problematic: it causes main-thread stalls or even crashes. So I've added support to our framework for the old-fashioned Xcode way of defining a method for every test, but Xcode scrambles the order, which in turn messes up the generation of logs and reports for Testrail.

XCUITest executes tests in ascending alphabetical order (unless execution is randomised). You can name the test methods like the following to make them run in the order you want:
func test01FirstCase() { /* test code */ }
func test02ThirdCaseButRunsSecond() { /* test code */ }
func test03SecondCaseButRunsThird() { /* test code */ }

If you don't want the app to launch and terminate around every test method, simply take the XCUIApplication().launch() call out of the setUp method and put it into the very first test function. That way the app launches only once, and the second test starts exactly from the point where the first test ended.
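A minimal sketch of that pattern, assuming a plain XCUITest target (the class and test names here are placeholders, not from the question):

import XCTest

final class OrderedFlowTests: XCTestCase {

    override func setUp() {
        super.setUp()
        continueAfterFailure = false
        // Deliberately no XCUIApplication().launch() here,
        // so the app is not relaunched before every test method.
    }

    func test01LaunchApp() {
        // Runs first (alphabetical order) and launches the app exactly once.
        XCUIApplication().launch()
    }

    func test02ContinuesFromPreviousState() {
        // Picks up whatever state test01LaunchApp left the app in.
        XCTAssertEqual(XCUIApplication().state, .runningForeground)
    }
}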
If you want to maintain an execution sequence, one way is to use any one of the following naming schemes (note that XCTest only discovers methods whose names start with the lowercase prefix test):
func test01() {}
func test02() {}
func test_A() {}
func test_B() {}
func test01_AnyString() {}
func test02_AnyString() {}
Or, if you want to start all the execution from a single method, follow the pattern below:
func test01_veryFirstTestMethod() {
    secondTestMethod()
    thirdTestMethod()
    fourthTestMethod()
    // ...and so on
}
Note: secondTestMethod, thirdTestMethod, fourthTestMethod and so on must not start with the test prefix in their declarations, otherwise XCTest would pick them up as separate tests.

Related

Mock initial viewController - code coverage results

I have a problem and haven't found a solution even after several hours of trial and error, googling, and searching Stack Overflow.
I have a view controller into which I would like to pass an object via dependency injection. The object conforms to a protocol. Setting up a unit test is generally not a problem; mocking works and the unit tests run. So where's the problem?
I am testing only one class in my primary target, and this class has absolutely nothing to do with view controllers. Yet the code coverage shows a sizeable percentage of the view controllers as covered. After a while I found out that when I hit the "test" button, the project is executed as if I had pushed the "run" button. Because of that, the view controller gets initialized and created, and I have no chance to pass in a different dependency before the tests are executed.
So I need a way to distinguish a test run from a real run, so that I can pass a real object in one case and a fake object in the other.
My question is: how do I do that? I wonder why nobody else seems to have this problem. What good is the code coverage tool if it reports methods as covered even though I haven't tested them?
The one and only class that I am testing:
And these are the coverage results (The bars are just gray because Xcode lost focus during screenshot. Otherwise they are blue.):
So I was expecting the results to show only the class I am testing as covered, and not everything else. I know why this happens: the view controller has a dependency, and once that dependency is initialized it creates more classes, and so on. What I would like to do is pass a fake object during unit testing and a real object during a real run, just like it works in Visual Studio for non-UI tests: when the tests are executed, the application does NOT start up; the test runner just initializes the subjects under test, and that's all. This is what I want to achieve for iOS unit tests. I guess I've missed something very important :(
For all of us who have, or will have, the same problem: the solution is to specify environment variables for the test run (in the scheme's Test action). After that you can check whether you are running your unit tests with code like this (assuming you have created an environment variable called "InTestMode" and set its value to "1" for the test run):
let dict = ProcessInfo.processInfo.environment
if let env = dict["InTestMode"] {
    return env == "1"
}
return false
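One way to use that check for the dependency injection the question asks about might look like the following; the "InTestMode" flag matches the answer above, but the protocol, types, and function names are hypothetical:

import Foundation

// Hypothetical protocol and implementations; only the "InTestMode"
// environment variable comes from the answer above.
protocol DataProviding {
    func loadItems() -> [String]
}

struct RealDataProvider: DataProviding {
    func loadItems() -> [String] {
        // Talk to the real backend here.
        return []
    }
}

struct FakeDataProvider: DataProviding {
    func loadItems() -> [String] {
        return ["stubbed item"]
    }
}

func isRunningUnitTests() -> Bool {
    return ProcessInfo.processInfo.environment["InTestMode"] == "1"
}

// Call this wherever the view controller's dependency is created
// (e.g. in the app delegate) instead of constructing the real object directly.
func makeDataProvider() -> DataProviding {
    return isRunningUnitTests() ? FakeDataProvider() : RealDataProvider()
}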

How to use tests in Xcode (iOS application)

In every new project I see a file with the following content (this file belongs to a different target, ProjectNameTests):
- (void)setUp {
    [super setUp];
    // Put setup code here. This method is called before the invocation of each test method in the class.
}

- (void)tearDown {
    // Put teardown code here. This method is called after the invocation of each test method in the class.
    [super tearDown];
}

- (void)testExample {
    // This is an example of a functional test case.
    XCTAssert(YES, @"Pass");
}

- (void)testPerformanceExample {
    // This is an example of a performance test case.
    [self measureBlock:^{
        // Put the code you want to measure the time of here.
    }];
}
1) How do I use this file?
2) How can I test my application with it? Why is this better than just launching the application and testing it manually, or writing test code directly in the AppDelegate (when I need to test the response from the server, for instance)?
These are called unit tests, and you use them to test your components, such as classes and functions. You write a test, and if you later change something in your classes, the test will show you whether the class still works as expected.
For an example: http://www.preeminent.org/steve/iOSTutorials/XCTest/
You should also read a bit about test-driven development; it is a nice way of developing and standard practice at many companies.
As a downside of Xcode unit tests, I have to tell you that you can't use file operations within them.
1) To use that file, you replace testExample and testPerformanceExample with your own methods whose names begin with test. For example, if you want to test that your Foo class object returns true for isFoo:
- (void)testFooIsFoo {
    Foo *f = [Foo new];
    XCTAssertTrue([f isFoo], @"Foo object should return true for isFoo");
}
2) Why is this better than manual testing? You can write smaller tests for parts that might be more time-consuming to reach in the app. It also allows you to test things in isolation. So, rather than test that the network stack works, these kinds of tests are good for testing that you handle data correctly, and so on.
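For instance, a small test of a pure data-handling function might look like this in Swift (parseAge is a made-up example, not something from the question):

import XCTest

// Made-up data-handling function under test.
func parseAge(from text: String) -> Int? {
    guard let value = Int(text), value >= 0 else { return nil }
    return value
}

final class ParsingTests: XCTestCase {
    func testValidAgeIsParsed() {
        XCTAssertEqual(parseAge(from: "42"), 42)
    }

    func testNegativeAgeIsRejected() {
        XCTAssertNil(parseAge(from: "-3"))
    }
}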
Good testing requires a mix of unit testing (which is what this is) and manual testing.
Apple's documentation is light on the why and what you should unit test. But this is a good article from objc.io about testing this way and how to do it with XCTest.
Apple provides nice documentation. But if you want to see real code, I have a GitHub project that implements tests.
The main advantage of XCTest is that you can launch tests directly from Xcode without having to run the app and exercise it manually. Xcode provides a nice interface for doing that.

iOS and Xcode: understanding automatic testing

I have created a test project and noticed that Xcode generates a test class named YourProjectName-ProjectTests.
This is what it looks like:
- (void)setUp {
    [super setUp];
    // Put setup code here. This method is called before the invocation of each test method in the class.
}

- (void)tearDown {
    // Put teardown code here. This method is called after the invocation of each test method in the class.
    [super tearDown];
}

- (void)testExample {
    // This is an example of a functional test case.
    XCTAssert(YES, @"Pass");
}

- (void)testPerformanceExample {
    // This is an example of a performance test case.
    [self measureBlock:^{
        // Put the code you want to measure the time of here.
    }];
}
Would anyone be able to explain to me how to use these automated testing methods? Is there official documentation on this?
Are these tests meant to be used for UI testing or for app code testing? Or both?
You can perform UI tests through XCTest, but it's designed for unit testing. Instruments has UI Automation, which is designed for UI testing.
Opening up XCTestAssertions.h will give you a pretty good reference for what's included in XCTest. Generally all the assertions follow a similar pattern:
XCTAssertSomething(somethingBeingTested, @"A message to display if the test fails");
Hitting ⌘ 5 will bring up the test navigator which lets you run all of your tests, all of your failed tests, or individual tests. You can also run a test by clicking on the diamond shape to the left of the method name in the editor.
Here are a few tips I've come across when unit testing with XCTest:
The last argument in assertions is optional but in some cases it's useful to describe the behaviour you're looking for.
Tests are not run in any specific order, but your tests shouldn't rely on each other anyway.
Mock objects let you test far, far more than basic assertions. I like OCMock (see the sketch after this list for the general idea).
You can use Instruments on your tests for debugging.
Finally, if you have access to another machine, you can set up OS X Server to automatically run your tests daily or on every commit and notify you if any of the tests fail.
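OCMock is an Objective-C library; in Swift the same idea is usually expressed with a hand-rolled mock behind a protocol. A minimal sketch, with all of the types invented purely for illustration:

import XCTest

// Protocol the production code depends on.
protocol Analytics {
    func track(event: String)
}

// Hand-rolled mock that records what it was asked to do.
final class AnalyticsMock: Analytics {
    private(set) var trackedEvents: [String] = []
    func track(event: String) { trackedEvents.append(event) }
}

// A tiny object under test that reports an event when something happens.
final class CheckoutFlow {
    private let analytics: Analytics
    init(analytics: Analytics) { self.analytics = analytics }
    func complete() { analytics.track(event: "checkout_completed") }
}

final class CheckoutFlowTests: XCTestCase {
    func testCompletingCheckoutTracksAnEvent() {
        let mock = AnalyticsMock()
        let flow = CheckoutFlow(analytics: mock)
        flow.complete()
        XCTAssertEqual(mock.trackedEvents, ["checkout_completed"])
    }
}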
This is how unit testing is implemented in Xcode.
The setUp and tearDown methods are there to prepare any mock data necessary to test a unit (a selected part of the code).
The two test methods are just examples. Usually, as the author of the unit of code, you know well what the unit is supposed to do and what it is not.
Using the test methods, you check how the unit satisfies those assumptions and expectations. The convention is to have a separate test method for each assumption or expected behaviour of the unit.
For example, given a class that handles strings and has a method that converts any string to ALL CAPS, you would test that passing a random non-caps string still returns ALL CAPS, and that passing something that is not a valid string (nil, for example) returns nil or an error.
So basically, you test that your units behave as expected.
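As a Swift sketch of that example (the Capitalizer type is invented here purely to illustrate the convention):

import XCTest

// Invented unit under test: uppercases strings and rejects empty input.
struct Capitalizer {
    func allCaps(_ text: String) -> String? {
        guard !text.isEmpty else { return nil }
        return text.uppercased()
    }
}

final class CapitalizerTests: XCTestCase {
    // One test method per expectation about the unit's behaviour.
    func testMixedCaseStringBecomesAllCaps() {
        XCTAssertEqual(Capitalizer().allCaps("hello World"), "HELLO WORLD")
    }

    func testEmptyInputReturnsNil() {
        XCTAssertNil(Capitalizer().allCaps(""))
    }
}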
There's a whole theory behind Unit Testing which is out of scope of this thread. Just Google it.

Shared tests in XCTest test suites

I have a class which can be configured to do 2 slightly different things. I want to test both paths. The class is a descendant of UIViewController and most of the configuration takes place in Interface Builder. I need to verify that both Storyboard Scenes and their Outlets are wired up in the same way, but also need to check for the difference in behavior.
I'd like to use shared XCTest suites for this purpose.
One scene is aimed at use with the left hand, the other at the right. They appear one after the other when using the app. The first one (right hand) triggers a segue to the other; the last one (left hand) should trigger a different segue. That's one place where the behavior differs, for example.
Now I want to verify the segues with tests. I'd like to create a BothHandSharedTests suite which both view controller instance tests use to verify everything they have in common. However, the BothHandSharedTests class is treated as a self-contained test suite, which it clearly isn't.
I came up with these strategies:
inherit from an abstract XCTest descendant, as described above (doesn't seem to be that easy),
write a test suite for the common properties, use one of the two as the object under test, and add two smaller suites for the differences.
How would you solve this problem?
Here's a solution in Swift:
class AbstractTests: XCTestCase {

    // Your shared tests here.

    // Only run the tests when a concrete subclass is executing;
    // the abstract base class itself is skipped.
    override func perform(_ run: XCTestRun) {
        if type(of: self) != AbstractTests.self {
            super.perform(run)
        }
    }
}
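Concrete subclasses then inherit the shared tests and run them automatically. A usage sketch of that solution, with invented class and method names (they are not from the question):

final class LeftHandViewControllerTests: AbstractTests {
    override func setUp() {
        super.setUp()
        // Load the left-hand storyboard scene here.
    }

    func testTriggersLeftHandSegue() {
        // Behaviour specific to the left-hand variant.
    }
}

final class RightHandViewControllerTests: AbstractTests {
    override func setUp() {
        super.setUp()
        // Load the right-hand storyboard scene here.
    }

    func testTriggersRightHandSegue() {
        // Behaviour specific to the right-hand variant.
    }
}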
I don't have a conclusive answer, but here's what I ended up doing.
I first tried the subclassing route. In the parent test (the "AbstractTestCase") I implemented all the tests that would be executed by the AbstractTestCase subclasses, but added a macro so they don't get run by the actual parent test:
#define DONT_RUN_TEST_IF_PARENT if ([[self className] isEqualToString:@"AbstractTestCase"]) { return; }
I then added this macro to the start of every test, like so:
- (void)testSomething
{
    DONT_RUN_TEST_IF_PARENT
    // ... actual test code ...
}
This way, in the ConcreteTestCase classes which inherit from AbstractTestCase, all those tests would be shared and run automatically. You can override -setUp to perform the necessary class-specific set-up, of course.
However, this turned out to be a crappy solution, for a couple of reasons:
It confuses Xcode's testing UI. You don't really get to see a live representation of what's running, and tests sometimes don't show up as intended. This makes clicking through to debug test failures difficult or impossible.
It confuses XCTest itself: I found that tests would often get run even when I didn't ask them to (for example, when I was just trying to run a single test), so the test output wouldn't be what I expected.
Honestly, it felt a little janky to have that macro; macros that redirect control flow are never a particularly good idea.
Instead, I'm now using a shared object, a TestCaseHelper, which is instantiated for each test class and has a protocol/delegate pattern common to all test cases. It's less DRY (most test cases are just duplicates of the others), but at least they are simple. This way, Xcode doesn't get confused, and debugging failures is still possible.
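The post doesn't show the helper itself, but purely as an illustration, such a helper might be shaped roughly like this (all names and details here are guesses, not the author's code):

import UIKit
import XCTest

// Guessed shape: the helper owns the shared assertions and asks the concrete
// test case, via a delegate protocol, for whatever differs between the cases.
protocol TestCaseHelperDelegate: AnyObject {
    func makeViewControllerUnderTest() -> UIViewController
}

final class TestCaseHelper {
    weak var delegate: TestCaseHelperDelegate?

    func verifyCommonBehavior(file: StaticString = #file, line: UInt = #line) {
        guard let delegate = delegate else {
            XCTFail("Delegate not set", file: file, line: line)
            return
        }
        let controller = delegate.makeViewControllerUnderTest()
        controller.loadViewIfNeeded()
        XCTAssertNotNil(controller.view, "Scene should load its view", file: file, line: line)
        // ...more shared assertions, parameterized through the delegate...
    }
}

Each concrete XCTestCase would then create a helper in setUp, set itself as the delegate, and call verifyCommonBehavior() from its own (largely duplicated) test methods, which matches the "less DRY but simple" trade-off described above.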
A better solution will likely have to come from Apple, unless you're interested in ditching your entire test suite for something else.

Filtering code coverage by calling function in OpenCover

I have some integration tests written for MsTest. The integration tests have the following structure:
[TestClass]
public class When_Doing_Some_Stuff
{
    [TestInitialize]
    protected override void TestInitialize()
    {
        // create the Integration Test Context
        EstablishContext();

        // trigger the Integration Test
        When();
    }

    protected void EstablishContext()
    {
        // call services to set up context
    }

    protected override void When()
    {
        // call service method to be tested
    }

    [TestMethod]
    public void Then_Result_Is_Correct()
    {
        // assert against the result
    }
}
I need to filter the code coverage results of a function by its caller. Namely, I want coverage to be counted only if the function is called from a function named "When", or from one that has a certain attribute applied to it.
As it stands, even if a method in the system is only called in the EstablishContext part of some test, the method is marked as visited.
I believe there is no filter for this, and I would like to make the changes myself, as OpenCover is... well... open. But I really have no idea where to start. Can anyone point me in the right direction?
You might be better off addressing this to the OpenCover developers; hmmm... that would be me, then. If you look at the wiki you will see that coverage by test is one of the eventual aims of OpenCover.
If you look at the forks you will see a branch from mancau; he initially indicated that he was going to try to implement this feature, but I do not know how far he has progressed or whether he has abandoned his attempt (what he has submitted is just a small re-introduction of code to allow the tracing of calls).
OpenCover tracks coverage by emitting a visit identifier and updating the next element in an array that resides in shared memory, shared between the profiler (C++/native/32- and 64-bit) and the console (C#/managed/AnyCPU). What I suggested to him, and this will be my approach when I get round to it if no one else does (it is also why I emit the visit data in this way), is to add markers into the sequence to indicate that a particular test method has been entered or left (filtered on the [TestMethod] attribute, perhaps), and then, when processing the results in the console, fold that information into the model in some way. Threading may also be a concern, as it could cause the interleaving of visit points for tests run in parallel.
Perhaps you will think of a different approach and I look forward to hearing your ideas.
