Using XCUITest and XCTest together - iOS

I am trying to use XCUI tests and XC tests together. I found this Twitter post saying it was possible. However, in which section of the build settings do I put those new attributes?
I ask because I tried that method and put those settings in the user-defined section of the project target, and it would not let me run my tests because those settings were defined.

UI tests operate like this:
The app is launched.
The tests run in a separate process, external to the app, and tell the app what to do.
Unit tests operate like this:
The app is launched.
The test code is injected into the running app.
The tests are executed.
These are radically different. UI tests operate strictly from the outside. They have no access to the internals of the program. In the end, UI tests boil down to simulating user actions.
Unit tests, on the other hand, operate from the inside. They can reach anything that is non-private.
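To make the contrast concrete, here is a minimal sketch of both styles, assuming a hypothetical Counter type and an "Increment" button in the app under test:

```swift
import XCTest

// UI test: the test runner is a separate process that drives the app
// from the outside through the accessibility layer.
final class CounterUITests: XCTestCase {
    func testTappingIncrementShowsOne() {
        let app = XCUIApplication()
        app.launch()                                // launches the app process
        app.buttons["Increment"].tap()              // simulates a user gesture
        XCTAssertTrue(app.staticTexts["1"].exists)  // sees only what's on screen
    }
}

// Unit test: injected into the running app, so anything non-private
// is directly reachable.
final class CounterTests: XCTestCase {
    func testIncrementUpdatesValue() {
        var counter = Counter()            // hypothetical model type from the app
        counter.increment()
        XCTAssertEqual(counter.value, 1)   // asserts on internal state
    }
}
```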
The only way for UI tests to perform something like a unit test would be to build the test functionality into the production code, accessible through gestures. There are better ways to unit test than that, namely using unit testing frameworks.
So… no. They shouldn't live together.

Related

How to run many e2e files in a specific order with Detox and Jest for a React Native app?

As stated, I'm testing a React Native app with Detox (and Jest) and I'd like to have several e2e files with different purposes (e.g. login, filling in a form, and so on) and run them in a specific order (the login e2e file should go first). Running them in random order wouldn't do the job.
The goal is to avoid having one huge file.
Note: I'm running the tests on iOS simulator.
Short answer: you can't, but keep reading.
Jest's conceptual model is that each test file is a unit and is totally isolated from the others. It makes things easier to reason about and it allows parallelisation. If your tests need to be run in a specific order then they are logically a single unit so need to be specified in a single test file.
However that doesn't preclude you from splitting your tests into several files. You can have one file which Jest recognises (e.g. full-suite.e2e.js) and have that file include several others (e.g. login.js, forms.js, etc.). That way Jest runs everything as one file, in your specified order, yet you can organise your individual tests in a way that makes logical sense to you.
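A minimal sketch of that layout, reusing the file names from above (the describe label and test body are illustrative):

```js
// full-suite.e2e.js — the only file Jest picks up as a test suite.
// Requiring the other files here registers their tests in this order.
describe('full suite', () => {
  require('./login');   // login tests register first
  require('./forms');   // then the form-filling tests
});

// login.js — plain describe/it calls; since the file name doesn't match
// Jest's test pattern, it never runs on its own.
it('logs in with valid credentials', async () => {
  // ... Detox steps ...
});
```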

How to leave simulator open after test run?

When running a UI Test, how do I keep the simulator open so I can manually test additional steps?
After a UI Test completes, the simulator will shut down. I'd like to use the UI Test as an automation to reach a certain point in the app. Then I'll do a few additional things not covered by the test.
I don't think there's an option for that. You should separate your automated and manual testing: automated tests ideally run on a CI, and manual testing should happen outside the UI test run.

How to functionally test dependent Grails applications

I am currently working on a distributed system consisting of two Grails apps (3.x), let's call them A and B, where A depends on B. For my functional tests I am wondering: How can I automatically start B when I am running the test suite of A? I am thinking about something like JUnit rules, but I could not find any docs on how to programmatically start/manage Grails apps.
As a side note, for nice and clean IDE integration I do not want to launch B as part of my build test phase, but as part of my test suite setup.
A couple of months later, and now deeper into the topic of microservices, I would suggest not treating system tests as candidates for a single project. While I would still keep unit- and service-level tests (i.e. API tests with mocked collaborators) in the same project as the affected service, I would spin up a system landscape via Gradle and Docker and then run an end-to-end test suite in the form of UI tests.
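As a rough sketch of such a landscape (image names, ports, and the environment variable are all assumptions, not anything Grails-specific), a docker-compose file could wire B up as a dependency of A, with the end-to-end suite then pointed at A from the outside:

```yaml
# docker-compose.yml — hypothetical system landscape for the two apps
services:
  app-b:
    image: example/app-b:latest
    ports:
      - "8081:8080"
  app-a:
    image: example/app-a:latest
    environment:
      APP_B_URL: http://app-b:8080   # A reaches B by service name
    depends_on:
      - app-b
    ports:
      - "8080:8080"
# The UI/e2e suite then runs against http://localhost:8080
```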

Cannot run a single KIFTestCase (subclass of XCTestCase)

I am new to automated testing.
I'm trying to do automated integration testing of my app with the KIF framework to make testing before releases easier. I have several test cases. When I run the tests (Cmd + U), the test cases run, but in a strange sequence (not in alphabetically sorted order). I also cannot run a single test case; when I try to, a random test case runs before the one I want.
P.S. Some of my test cases inherit from more general test cases.
Can you give me any hints as to what the cause might be?
Thanks!
AFAIK, test cases have no defined order and they should be independent of one another. If you have tests that depend on execution order, you're doing testing incorrectly and need to refactor your tests to be independent. Note also that XCTest runs test methods inherited from a superclass in every subclass, so a test case that inherits from a more general test case will run the parent's tests too, which may explain the extra tests you see.
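For illustration, a sketch of order-independent tests: shared preconditions live in setUp() rather than in a previously-run test, so each test passes regardless of sequence (the same idea applies to KIFTestCase subclasses; resetAppState and logInAsTestUser are hypothetical helpers):

```swift
import XCTest

final class ProfileTests: XCTestCase {
    override func setUp() {
        super.setUp()
        // Every test starts from the same known state, so none of them
        // relies on another test having run first.
        resetAppState()        // hypothetical helper
        logInAsTestUser()      // hypothetical helper
    }

    func testEditProfile() {
        // ... steps that assume only the setUp() state ...
    }

    func testChangeAvatar() {
        // ... runs correctly whether or not testEditProfile ran before it ...
    }
}
```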

Automated application testing with TFS

I think I'm missing a link somewhere in how Microsoft expects TFS and automated testing to work together. TFS allows us to create test cases that have test steps. These can be merged into a variety of test plans. I have this all set up and working as I would expect for manual testing.
I've now moved into automating some of these tests. I have created a new Visual Studio project, which relates to my test plan. I have created a test class that relates to the test case and planned to create a test method for each test step within the test class, using an ordered test to ensure that the methods are executed in the same order as the test steps.
I'd hoped that I could then link this automation up to the test case so that it could be executed as part of the test plan.
This is where it all goes wrong. It is my understanding that the association panel only hooks a test case up to a particular test method, not to individual test steps.
Is my understanding correct?
Have MS missed a trick here and made things a little too complicated, or have I missed something? If I hook a whole test case up to a single method, I lose granularity of what each step is doing.
If each test step were hooked up to its own test method, it would be possible for each method's assert to register a pass or fail against the overall test case.
Any help or direction so that I can improve my understanding would be appreciated.
The link is not obvious. In Visual Studio Team Explorer, create and run a query to find the test case(s). Open the relevant test case and view the test automation section. On the right-hand side of the test automation line there should be an ellipsis; click it and select the automated test to associate.
I view this as pushing an automated test from Visual Studio. Confusingly, you cannot pull an automated test into MTM.
You can link only one method to a test case. That one method should cover all the steps written in its associated test case, including verification (assertions).
If it is impossible to cover all the steps in one test method, or if you have too many verifications in your test case, then the test case needs to be broken down into smaller test cases, each with one automated method associated with it.
An automated test should work like this (not a hard rule, though):
Start -> Do some action -> Verify (Assert) -> Finish
You can write as many assertions as you like, but if the first assert fails, the test won't proceed to the remaining assertions. This is how manual testing works too, i.e. the test fails even if 1 step out of 100 fails.
For the sake of maintainability, it is advisable to keep the number of asserts in an automated test to a minimum, and the easiest way to achieve that is by splitting the test case. Microsoft and other test automation providers work this way; we don't write test methods for each and every step, as that would make things very complicated.
But yes, you can write reusable methods (not test methods) in your test framework for individual steps and call them from your test methods. For example, you don't have to write the code for a step such as "Application Login" again and again: write that method once and call it from every test method that is linked to a test case, as sketched below.
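For instance, a hypothetical MSTest sketch in C# (all names are illustrative): the shared "Application Login" step lives in a plain helper method, and each [TestMethod], associated with one test case, calls it.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderTests
{
    // Reusable step, not a test method: shared across test cases.
    private void ApplicationLogin(string user, string password)
    {
        // ... drive the application under test to log in ...
    }

    // One test method per test case; this is the method you associate
    // with the test case in its automation section.
    [TestMethod]
    public void PlaceOrder_ShowsConfirmation()
    {
        ApplicationLogin("testUser", "secret");    // start
        // ... do some action: add an item, check out ...
        Assert.IsTrue(OrderConfirmationVisible()); // verify, then finish
    }

    private bool OrderConfirmationVisible()
    {
        // ... query the application state or UI ...
        return true; // placeholder so the sketch compiles
    }
}
```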