Why no conditional statements in functional testing frameworks like KIF?

I'm new to iOS, Xcode, the KIF framework, and Objective-C, and my first assignment is to write test code using KIF. It sure seems like it would be a lot easier if KIF had conditional statements.
Basically something like:
if ([tester existsViewWithAccessibilityLabel:@"Login"]) {
    [self login];
}
// continue with test in a known state
When you run one test at a time, KIF exits the app after the test. If you run all your tests at once, it does not exit in between tests, which requires testers to be very, very careful about the state of the application (and that is very time consuming and not fun).

Testing frameworks typically don't implement conditional statements because the host language already provides them.
You can look at the testing framework's source code to see how it performs its own state checks. That will teach you how to do most things you might want to do (even if it is not always a good idea to do them during a functional test). You could also look here: Can I check if a view exists on the screen with KIF?
Besides, your tests should be assertive in nature and follow this workflow:
Given:
the user has X state setup
(here you write code to assertively setup the state)
It is OK and preferred to isolate your tests and set up the "given" state directly (e.g. set the login name in the model without going through the UI), as long as you have covered that behavior in another test.
When:
The user tries to do X
(here you tap something etc..)
Then:
The system should respond with Z
(here you verify that the system did what you expect)

The first step in every test should be to reset the app to a known state, because it's the only way to guarantee repeatable testing. Once you start putting conditional code in the tests themselves, you are introducing unpredictability into the results.

You can always try the method tryFindingViewWithAccessibilityLabel:error:, which returns YES if it can find the view and NO otherwise.
if ([tester tryFindingViewWithAccessibilityLabel:@"Login" error:nil]) {
    // Test things
}


XCUITest: How to jump into app code? How to modify the state of the app under test?

Coming from an Android/Espresso background, I am still struggling with XCUITest and UI testing for iOS.
My question is about two related but distinct issues:
How to compile and link against sources of the app under test?
How to jump into methods of the app under test and modify its state at runtime?
To tackle these questions, we should first understand the differences between Xcode's "unit test targets" and "UI test targets".
XCUITests run inside a completely separate process and cannot jump into methods of the app under test. Moreover, by default, XCUITests are not linked against any sources of the app under test.
In contrast, Xcode's unit tests are linked against the app sources. There is also the option to do "@testable imports". In theory, this means that unit tests can jump into arbitrary app code. However, unit tests do not run against the actual app. Instead, unit tests run against a stripped-down version of the iOS SDK without any UI.
Now there are different workarounds for these constraints:
Add some selected source files to a UI test target. This does not make it possible to call into the app, but it at least allows selected code to be shared between the app and the UI tests.
Pass launch arguments via CommandLine.arguments from the UI tests to the app under test. This makes it possible to apply test-specific configurations to the app under test (see the sketch after this list). However, those launch arguments need to be parsed and interpreted by the app, which pollutes the app with testing code. Moreover, launch arguments are only a non-interactive way to change the behavior of the app under test.
Implement a "debug UI" that is only accessible for XCUITest. Again, this has the drawback of polluting app code.
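As a rough illustration of the launch-argument workaround, here is a minimal sketch; the flag names and the stubbed network client are invented for illustration, not taken from any real project:

import XCTest

// UI test side: pass the flags when launching the app under test.
final class StubbedNetworkUITests: XCTestCase {
    func testSomethingAgainstStubbedNetwork() {
        let app = XCUIApplication()
        app.launchArguments += ["-uiTesting", "-useStubbedNetwork"]   // hypothetical flags
        app.launch()
        // ... drive the UI as usual ...
    }
}

// App side, early during launch (e.g. in application(_:didFinishLaunchingWithOptions:)):
//
//     if CommandLine.arguments.contains("-useStubbedNetwork") {
//         networkClient = StubbedNetworkClient()   // hypothetical test double
//     }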
This leads to my concluding questions:
Which alternative methods exist to make XCUI tests more powerful/dynamic/flexible?
Can I compile and link UI tests against the entire app source and all pod dependencies, instead of only a few selected files?
Is it possible to gain the power of Android's instrumented tests + Espresso, where we can perform arbitrary state modifications on the app under test?
Why We Need This
In response to @theMikeSwan, I would like to clarify my stance on UI test architecture.
UI Tests should not need to link to app code, they are designed to simulate a user tapping away inside your app. If you were to jump into the app code during these tests you would no longer be testing what your app does in the real world, you would be testing what it does when manipulated in a way no user ever could. UI tests should not have any need of any app code any more than a user does.
I agree that manipulating the app in such a way is an anti-pattern that should only be used in rare situations.
However, I have a very different stance on what should be possible.
In my view, the right approach for UI tests is not black-box testing but gray-box testing. Although we want UI tests to be as black-boxy as possible, there are situations where we want to dig deep into implementation details of the app under test.
Just to give you a few examples:
Extensibility: No UI testing framework can provide an API for each and every use case. Project requirements are different and there are times where we want to write our own function to modify the app state.
Internal state assertions: I want to be able to write custom assertions for the state of the app (assertions that do not only rely on the UI). In my current Android project, we had a notoriously broken subsystem. I asserted this subsystem with custom methods to guard against regression bugs.
Shared mock objects: In my current Android project, we have custom hardware that is not available for UI tests. We replaced this hardware with mock objects. We run assertions on those mock objects right from the UI tests. These assertions work seamlessly via shared memory. Moreover, I do not want to pollute the app code with all the mock implementations.
Keep test data outside: In my current Android project, we load test data from JUnit right into the app. With XCUITest's command line arguments, this would be way more limited.
Custom synchronization mechanisms: In my current Android project, we have wrapper classes around multithreading infrastructure to synchronize our UI tests with background tasks. This synchronization is hard to achieve without shared memory (e.g. Espresso IdlingResources).
Trivial code sharing: In my current iOS project, I share a simple definition file for the aforementioned launch arguments. This makes it possible to pass launch arguments in a typesafe way without duplicating string literals (see the sketch after this list). Although this is a minor use case, it still shows that selective code sharing can be valuable.
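To illustrate that last point, a tiny sketch of such a shared definition file might look like this; the enum and its cases are invented, and the idea is simply that the file is a member of both the app target and the UI test target:

import Foundation

// One place for the launch-argument names, shared by the app and the UI tests.
enum TestLaunchArgument: String {
    case uiTesting = "-uiTesting"
    case useStubbedNetwork = "-useStubbedNetwork"
}

// App side:
//     if CommandLine.arguments.contains(TestLaunchArgument.useStubbedNetwork.rawValue) { ... }
// UI test side:
//     app.launchArguments.append(TestLaunchArgument.useStubbedNetwork.rawValue)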
For UI tests you shouldn't have to pollute your app code too much. You could use a single command line argument to indicate UI tests are running and use that to load up some test data, log in a test user, or pick a testing endpoint for network calls. With good architecture you will only need to make the adjustment once when the app first launches, with the rest of your code oblivious that it is using test data (much like if you have a development environment and a production environment that you switch between for network calls).
This is exactly the thing that I am doing in my current iOS project, and this is exactly the thing that I want to avoid.
Although a good architecture can avoid too much havoc, it still pollutes the app code. Moreover, it does not solve any of the use cases that I highlighted above.
By proposing such a solution, you essentially admit that radical black-box testing is inferior to gray-box testing.
As in many parts of life, a differentiated view is better than a radical "use only the tools that we give you, you should not need to do this".
UI Tests should not need to link to app code, they are designed to simulate a user tapping away inside your app. If you were to jump into the app code during these tests you would no longer be testing what your app does in the real world, you would be testing what it does when manipulated in a way no user ever could. UI tests should not have any need of any app code any more than a user does.
For unit tests and integration tests of course you use @testable import … to get access to any methods and properties that are not marked private or fileprivate. Anything marked private or fileprivate will still be inaccessible from test code, but everything else, including internal, will be accessible. These are the tests where you should intentionally throw in data that can't possibly occur in the real world to make sure your code can handle it. These tests should still not reach into a method and make any changes, or the test won't really be testing how the code behaves.
You can create as many unit test targets as you want in a project and you can use one or more of those targets to hold integration tests rather than unit tests. You can then specify which targets run at various times so that your slower integration tests don't run every time you test and slow you down.
The environment that unit and integration tests run in actually has everything. You can create an instance of a view controller and call loadViewIfNeeded() to have the entire view set up. You can then test for the existence of various outlets and trigger them to send actions (check out UIControl's sendActions(for:) method). Provided you have set up the necessary mocks, this will let you verify that when a user taps button A, a call gets sent to the proper method of thing B.
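As a small, self-contained sketch of that technique (the tiny view controller below is invented purely for illustration; in a real project you would @testable import your app and exercise the real one):

import XCTest
import UIKit

// A stand-in for one of your own view controllers.
final class LoginViewController: UIViewController {
    let loginButton = UIButton(type: .system)
    private(set) var didAttemptLogin = false

    override func viewDidLoad() {
        super.viewDidLoad()
        loginButton.addTarget(self, action: #selector(tapLogin), for: .touchUpInside)
        view.addSubview(loginButton)
    }

    @objc private func tapLogin() { didAttemptLogin = true }
}

final class LoginViewControllerTests: XCTestCase {
    func testTappingLoginTriggersTheLoginFlow() {
        let sut = LoginViewController()
        sut.loadViewIfNeeded()                             // runs viewDidLoad and wires the button

        sut.loginButton.sendActions(for: .touchUpInside)   // simulates the user's tap

        XCTAssertTrue(sut.didAttemptLogin)
    }
}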
For UI tests you shouldn't have to pollute your app code too much. You could use a single command line argument to indicate UI tests are running and use that to load up some test data, log in a test user, or pick a testing endpoint for network calls. With good architecture you will only need to make the adjustment once when the app first launches, with the rest of your code oblivious that it is using test data (much like if you have a development environment and a production environment that you switch between for network calls).
If you want to learn more about testing Swift, Paul Hudson has a very good book you can check out: https://www.hackingwithswift.com/store/testing-swift. It has plenty of examples of the various kinds of tests and good advice on how to split them up.
Update based on your edits and comments:
It looks like what you really want is Integration Tests. These are easy to miss in the world of Xcode as they don't have their own kind of target to create. They use the Unit Test target but test multiple things working together.
Provided you haven't added private or fileprivate to any of your outlets, you can create tests in a Unit Test target that make sure the outlets exist and then inject text or trigger their actions as needed to simulate a user navigating through your app.
Normally this kind of testing would just go from one view controller to a second one to test that the right view controller gets created when an action happens but nothing says it can't go further.
You won't get images of the screen for a failed test like you do with UI Tests, and if you use storyboards, make sure to instantiate your view controllers from the storyboard. Be sure that you grab any navigation controllers and the like that are required (see the sketch below).
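A minimal sketch of that storyboard advice; the storyboard name and the identifier are placeholders:

import XCTest
import UIKit

final class StoryboardIntegrationTests: XCTestCase {
    func testRootViewControllerLoadsFromStoryboard() {
        // Instantiate from the storyboard so outlets and segues are wired up,
        // and grab the navigation controller rather than the bare view controller.
        let storyboard = UIStoryboard(name: "Main", bundle: .main)
        let nav = storyboard.instantiateViewController(withIdentifier: "MainNavigation")
            as? UINavigationController
        let root = nav?.topViewController

        root?.loadViewIfNeeded()   // forces viewDidLoad and outlet wiring
        XCTAssertNotNil(root?.view)
    }
}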
This methodology will allow you to act like you are navigating through the app while being able to manipulate whatever data you need as it goes into various methods.
If you have a method with 10 lines in it and you want to tweak the data between lines 7 and 8 you would need to have an external call to something mockable and make your changes there or use a breakpoint with a debugger command that makes the change. This breakpoint trick is very useful for debugging things but I don't think I would use it for tests since deleting the breakpoint would break the test.
I had to do this for a specific app. What we did was create a kind of debug menu accessible only to UI tests (using launch arguments to make it available) and displayed with a certain gesture (two taps with two fingers in our case).
This debug menu is just a pop-up view appearing over all screens. In this view, we add buttons which allow us to update the state of the app.
You can then use XCUITest to display this menu and interact with its buttons. A rough sketch follows.
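Here is a rough sketch of how such a menu could be wired up; the launch-argument name and the menu contents are placeholders, and the gesture mirrors the two-finger double tap described above:

import UIKit

final class DebugMenuInstaller: NSObject {
    static let shared = DebugMenuInstaller()
    private weak var window: UIWindow?

    func installIfRequested(on window: UIWindow) {
        // Only wire the menu up when a UI test asked for it via a launch argument.
        guard CommandLine.arguments.contains("-enableDebugMenu") else { return }
        self.window = window
        let recognizer = UITapGestureRecognizer(target: self, action: #selector(showMenu))
        recognizer.numberOfTapsRequired = 2
        recognizer.numberOfTouchesRequired = 2
        window.addGestureRecognizer(recognizer)
    }

    @objc private func showMenu() {
        // A simple overlay with buttons that mutate app state
        // (log in a test user, clear caches, seed data, ...).
        let alert = UIAlertController(title: "Debug menu", message: nil, preferredStyle: .actionSheet)
        alert.addAction(UIAlertAction(title: "Log in test user", style: .default) { _ in
            // hypothetical call into the app's own code
        })
        alert.addAction(UIAlertAction(title: "Close", style: .cancel))
        window?.rootViewController?.present(alert, animated: true)
    }
}

The UI test then triggers the gesture (e.g. via XCUIElement's twoFingerTap()) and taps the menu's buttons through the usual element queries.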
I've come across this same problem, OP. Coming from the Android ecosystem and attempting to carry its solutions over to iOS will have you banging your head over why Apple does things this way. It makes things difficult.
In our case we replicated a network mocking solution for iOS that allows us to control the app state using static response files. However, using a standalone proxy to do this makes running XCUITest difficult on physical devices. Foundation's URLSession provides features that allow you to do the same thing from inside the configured session objects (see URLProtocol and URLSessionConfiguration's protocolClasses); a sketch follows.
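A minimal sketch of that URLProtocol-based stubbing; the response body and the idea of gating it behind a test launch argument are illustrative only:

import Foundation

// Serves canned responses instead of hitting the network, for any session
// configured with this protocol class.
final class StubURLProtocol: URLProtocol {
    override class func canInit(with request: URLRequest) -> Bool {
        true   // intercept every request made through the configured session
    }

    override class func canonicalRequest(for request: URLRequest) -> URLRequest { request }

    override func startLoading() {
        // A real setup would key the canned response off request.url.
        let body = Data(#"{"status":"ok"}"#.utf8)
        let response = HTTPURLResponse(url: request.url!, statusCode: 200,
                                       httpVersion: nil,
                                       headerFields: ["Content-Type": "application/json"])!
        client?.urlProtocol(self, didReceive: response, cacheStoragePolicy: .notAllowed)
        client?.urlProtocol(self, didLoad: body)
        client?.urlProtocolDidFinishLoading(self)
    }

    override func stopLoading() {}
}

// When the app builds its session (e.g. only when a test launch argument is present):
//
//     let configuration = URLSessionConfiguration.ephemeral
//     configuration.protocolClasses = [StubURLProtocol.self]
//     let session = URLSession(configuration: configuration)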
Now our UI tests have an IPC problem, since the runner app is in its own process. Previously, the proxy and the UI tests lived in the same process, so it was easy to control the responses returned for certain requests.
This can be done using some odd bridges for interprocess communication, like CFMessagePort and some others (see NSHipster).
Hope this helps.
How exactly does the unit test environment work?
The unit tests are in a bundle which is injected into the app.
Actually several bundles are injected. The overarching XCTest bundle is a framework called XCTest.framework, and you can actually see it inside the built app.
Your tests are a bundle too, with an .xctest suffix, and you can see that in the built app as well.
Okay, let's say you ask for one or more tests to run.
The app is compiled and runs in the normal way on the simulator or the device: for example, if there is a root view controller hierarchy, it is assembled normally, with all launch-time events firing the way they usually do (for instance, viewDidLoad, viewDidAppear, etc.).
At last the launch mechanism takes its hands off. The test runner is willing to wait quite a long time for this moment to arrive. When the test runner sees that this moment has arrived, it executes the test bundle's executable and the tests begin to run. The test code is able to see the main bundle code because it has imported the main bundle as a module, so it is linked to it.
When the tests are finished, the app is abruptly torn down.
So what about UI tests?
UI tests are completely different.
Nothing is injected into your app.
What runs initially is a special separate test runner app, whose bundle identifier is named after your UI test target with Runner appended; for example, com.apple.test.MyUITests-Runner. (You may even be able to see the test runner launch.)
The test runner app in turn backgrounds itself and launches your app in its own special environment and drives it from the outside using the Accessibility framework to "tap" buttons and "see" the interface. It has no access to your app's code; it is completely outside your app, "looking" at its interface and only its interface by means of Accessibility.
We use GCDWebServer to communicate between the test and the app under test.
The test asks the app under test to start this local server, and then the test can talk to the app through it. You can make requests to this server to fetch data from the app, as well as to tell the app to modify some behavior by providing data. A minimal sketch follows.
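Something along these lines, assuming GCDWebServer is linked into the app; the endpoint, port, and launch-argument gate are invented for illustration:

import GCDWebServer

// Runs inside the app under test, started only when a UI test asks for it.
final class TestBridge {
    private let server = GCDWebServer()

    func startIfRequested() {
        guard CommandLine.arguments.contains("-startTestBridge") else { return }   // hypothetical flag

        // Lets the test read (or, with more handlers, modify) internal app state.
        server.addHandler(forMethod: "GET", path: "/state",
                          request: GCDWebServerRequest.self) { _ in
            GCDWebServerDataResponse(jsonObject: ["loggedIn": true])   // report real state here
        }
        server.start(withPort: 8080, bonjourName: nil)
    }
}

// In the UI test, talk to the bridge with plain URLSession, e.g.
//     let url = URL(string: "http://localhost:8080/state")!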

What are the best practices for structuring an XCTest UI Test Suite?

I am setting up a test suite for an iOS app. I am using Xcode's XCTest framework. This is just a UI test suite. Right now I have a test target with one file TestAppUITests. All of my test cases reside in this file (first question: should all of my test cases be in here or do I need more files?)
Within this file, I have a bunch of test cases that behave as if a user were using the app: logging in / creating an account and then logging in -> navigating around the app -> checking whether UI elements loaded -> adding additional security like a secondary email address -> logging out.
How should these be ordered?
I've researched all around, found some gems here and there but still have questions. Within the test target, should you have multiple files? I just have the one within my UITest target.
My other big question is a bit harder to explain. Each time we run the test suite from the beginning, the app starts in a state where you are not logged in, so, for example, in order to test something like navigating WITHIN the app, I need to run the login test first. Right now I have it set up so that the login test runs once, then all other tests after it, and it ends with logout. But that one file TestAppUITests is getting very long with tons of test cases. Is this best practice?
So, let's divide this into parts:
1/ Should all of my test cases be in here or do I need more files?
Well, your tests are the same as any other app code you have. Do you have all of the app code in one file? Probably not, so a good practice is to divide your tests into more classes (I do it by what they test: a LoginTests class, a UserProfileTests class, etc.).
To go further with this, I also split test classes and helper methods into separate files. For example, I have a method that performs a login in a UI test, so that method lives in a UITestCase+Login extension (UITestCase is a class that is extended by all these UITestCase+Something extensions), and the test that does the login lives in LoginTests, where I call the login method from the UITestCase+Login extension.
However, you don't necessarily need more test classes; if you decide to have all of your UI tests in one class, that's your choice. Everything is going to work, but keeping the tests and the methods they use in separate files is just good practice for the future development of tests and methods.
2/ ... Add additional security like secondary email address... How should these be ordered?
Wrap them in methods and call those methods from your tests.
This is my method for expecting some UI message, when I use invalid login credentials:
func expectInvalidCredentialsErrorMessageAfterLoggingWithInvalidBIDCredentials() {
    let alertText = app.alerts.staticTexts[localizedString("mobile.account.invalid_auth_credentials")]
    let okButton = app.alerts.buttons[localizedString("common.ok")]
    wait(
        until: alertText.exists && okButton.existsAndHittable,
        timeout: 15,
        or: .fail(message: "Alert text/button are not visible.")
    )
    okButton.tap()
}
And this is my use of it in test:
func testTryToLoginWitMissingBIDAndExpectError() {
    let inputs = BIDLoginInputs(
        BID: "",
        email: "someemail@someemail.com",
        departureIATA: "xxx",
        dayOfDeparture: "xx",
        monthOfDeparture: "xxx",
        yearOfDeparture: "xxx"
    )
    loginWithBIDFromProfile(inputs: inputs)
    expectInvalidCredentialsErrorMessageAfterLoggingWithInvalidBIDCredentials()
}
You can see that the tests are very readable and that they consist (almost fully) of methods which are reusable across tests.
3/ Within the test target, should you have multiple files?
Again, this is up to you; however, having everything in one file is not really good for the maintenance and future development of these tests.
4/ ... Each time we run the test suite from the beginning the app starts in a state where you are not logged in...Right now I have it setup so that the login test runs once, then all other tests after it, then ends with logout...
Not a good approach (in my humble opinion, of course). Put the functionality into methods (yes, I repeat myself here :-) ) and divide the test cases into more files (ideally by the nature of their functionality, by "what they do").
Hope this helped you. I struggled a lot with the same questions when I was starting with iOS UI tests too.
Oh, and by the way: my article on medium.com about advanced tactics and approaches to iOS UI tests with XCTest is coming out in a few days. I can add a link once it is out; that should help you even further.
Echoing this answer, it is against best practices to store all of your tests for a complex application in a single file, and your tests should be structured so that they are independent of each other, only testing a single behaviour in each test.
It may seem counter-intuitive to split everything up into many tests which need to go through re-launching your app at the beginning of every test, but this makes your test suite more reliable and easier to debug as the number of unknowns in a test is minimised by having smaller, shorter tests. Since UI tests take a relatively long time to run, which can be frustrating, you should try to minimise the amount that you need by ensuring your app has good unit/integration test coverage.
With regard to best practices for structuring your XCTest UI tests, you should look into the Screenplay or Page Object models. The Page Object model has been around a long time and there are many posts about it, although many tend to focus on Selenium or Java-based test frameworks. I have written posts on both the Page Object model and the Screenplay model using Swift and XCTest; they should help you. A minimal Page Object sketch follows.
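This is only a sketch of the Page Object idea, with made-up accessibility identifiers and screens; each screen gets a small type that wraps its queries and actions, so tests read as a sequence of screen-level steps:

import XCTest

struct LoginScreen {
    let app: XCUIApplication

    func logIn(username: String, password: String) -> HomeScreen {
        let usernameField = app.textFields["usernameField"]
        usernameField.tap()
        usernameField.typeText(username)
        let passwordField = app.secureTextFields["passwordField"]
        passwordField.tap()
        passwordField.typeText(password)
        app.buttons["loginButton"].tap()
        return HomeScreen(app: app)
    }
}

struct HomeScreen {
    let app: XCUIApplication

    var isVisible: Bool { app.navigationBars["Home"].waitForExistence(timeout: 5) }
}

final class LoginUITests: XCTestCase {
    func testSuccessfulLoginShowsHome() {
        let app = XCUIApplication()
        app.launch()

        let home = LoginScreen(app: app).logIn(username: "test@example.com", password: "secret")
        XCTAssertTrue(home.isVisible)
    }
}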

Xcode 7 UI Test Order

I have several UI tests that I can run successfully, individually or grouped. I ended up breaking my tests up into specific classes and running them that way. The issue I've come across is that Xcode executes the UI tests in alphabetical order and not in the order they are written/displayed. Any idea on how to get around that?
Thank you
A good test suite shouldn't depend on being executed in a specific order. If yours does, you might have some test pollution. I would add common initialization logic (e.g. logging the user in) to the setUp() method of the relevant tests, or create a helper method and share that between classes. That, combined with relaunching the app for every test, should make the order of your tests irrelevant (a minimal sketch follows).
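Roughly like this; the logInTestUser() helper is a placeholder for whatever shared setup your suite needs:

import XCTest

class OrderIndependentUITests: XCTestCase {
    let app = XCUIApplication()

    override func setUp() {
        super.setUp()
        continueAfterFailure = false
        app.launch()        // every test gets a freshly launched app
        logInTestUser()     // shared setup instead of relying on a previous "login test"
    }

    private func logInTestUser() {
        // Sketch only: drive the login screen here, or skip it if the app
        // was launched with a "pre-authenticated" test launch argument.
    }
}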
XC testing is incredibly buggy. Sometimes it seems like the direction of the wind or the speed of the Earth's rotation will determine whether you get a random failure or not. One fix I found that somewhat alleviates these frustrating issues is to call this in your tearDown() function:
XCUIApplication().terminate()
Where XCUIApplication() is the application that you're running.

TDD: Unit Testing Asynchronous Calls

guys:
I'm working on an application, and building it with unit testing.
However, I'm now in a situation where I need to test asynchronous calls.
For example,
- (void)testUserInfoBecomesValidWhenUserIsBuiltSuccessfully
{
    if ( ![userBuilder userInfoUpToDate] )
    {
        [userBuilder buildUser];
    }
    STAssertTrue([userBuilder userInfoUpToDate], @"User information is not valid before building the user");
}
What is the general practice for testing such things?
userInfoUpToDate is expected to be updated asynchronously.
Thanks!
William
Sometimes there is a temptation to test things which you don't usually test with unit testing. This basically comes from a misunderstanding and the desire to test everything. And then you realize you don't know how to test it with unit testing.
You would do better to ask yourself: what am I testing here?
Do I test that the data is not available until the request completes?
Then you can write a non-async version of the test which checks that the data is available after the request completes.
Do I test that the response is saved correctly after the request?
You can also test it using flags in your logic.
You can do all logic tests without running asynchronous tests.
So at the bottom of it I would even ask you: why do you think you need to test the async call?
Unit tests are supposed to run quickly, so consider that another reason not to test async calls. Imagine a continuous integration system which runs these tests; it will need extra time.
And reading your comments on another answer, I think it's not common to use async in testing at all. For example, Kent Beck, in his TDD book, mentioned that concurrent unit testing is possible but a very rare case.
So: what do you really want to test, and why?
Use a run loop, polling until completion or a timeout is reached:
https://codely.wordpress.com/2013/01/16/unit-testing-asynchronous-tasks-in-objective-c/
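As a sketch of that run-loop polling approach, in Swift and modern XCTest (the flag below stands in for userInfoUpToDate in the question, and the timings are arbitrary):

import XCTest

final class AsyncPollingTests: XCTestCase {
    private var userInfoUpToDate = false

    private func buildUser() {
        // Simulated asynchronous work that flips the flag a little later.
        DispatchQueue.global().asyncAfter(deadline: .now() + 0.5) {
            DispatchQueue.main.async { self.userInfoUpToDate = true }
        }
    }

    /// Spins the main run loop until `condition()` is true or `timeout` elapses.
    private func poll(timeout: TimeInterval, until condition: () -> Bool) -> Bool {
        let deadline = Date(timeIntervalSinceNow: timeout)
        while !condition() && Date() < deadline {
            RunLoop.current.run(until: Date(timeIntervalSinceNow: 0.1))
        }
        return condition()
    }

    func testUserInfoBecomesValid() {
        buildUser()
        XCTAssertTrue(poll(timeout: 5) { userInfoUpToDate },
                      "User information did not become valid before the timeout")
    }
}

(These days XCTestExpectation covers most of this, but the polling helper mirrors the approach in the linked post.)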

ruby cucumber testing practices

I have many cucumber feature files, each consists of many scenarios.
When run together, some of them fail.
When I run each test file individually, they pass.
I think my database is not correctly cleaned after each scenario.
What is the correct process to determine what is causing this behavior?
By the sound of it, your tests are depending upon one another. You should be trying to get each individual test to do whatever setup is required for that individual test to run.
The setup parts should be done during the "Given" part of your features.
Personally, to stop the features from becoming verbose and to keep them close to the business language they were written in, I sometimes add additional steps that are required to do the setup and call them from the steps that are in the feature file.
I hope this makes sense to you.
This happens to me for different reasons at different times.
Sometimes it's that a stub or mock invoked in one scenario screws up another, but only when both are run (each is fine alone).
The only way I've been able to solve these is by debugging while running enough tests to get a failure. You can drop a debugger line in step_definitions or call it as a step itself ("When I call the debugger") and match that up to a step definition that just has 'debugger' as the Ruby code.
