Cannot run a single KIFTestCase (subclass of XCTestCase) - iOS

I am new to automated testing.
I am trying to do automated integration testing of my app with the KIF framework to make testing before releases easier. I have several test cases. When I run the tests (Cmd + U), the test cases run, but in a strange sequence (not in alphabetically sorted order). I also cannot run a single test case; when I try to, a random test case runs before the one I want to run.
P.S. Some of my test cases inherit from more general test cases.
Can you give me any hints as to what the cause might be?
Thanks!

AFAIK, test cases have no defined execution order, and they should be independent of one another. If you have unit tests that depend on execution order, you're testing incorrectly and need to refactor your tests to be independent.
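What independence looks like in practice: reset shared state in setUp/tearDown so each test builds everything it needs itself. Below is a minimal XCTest sketch (KIF test cases are XCTestCase subclasses, so the same pattern applies); the Account type is hypothetical.

import XCTest

// Hypothetical type under test, defined here so the sketch is self-contained.
final class Account {
    private(set) var balance = 0
    func deposit(_ amount: Int) { balance += amount }
    func withdraw(_ amount: Int) { balance -= amount }
}

final class AccountTests: XCTestCase {
    var account: Account!

    // Runs before every test method, so each test starts from the same known state.
    override func setUp() {
        super.setUp()
        account = Account()
    }

    override func tearDown() {
        account = nil
        super.tearDown()
    }

    func testDepositIncreasesBalance() {
        account.deposit(10)
        XCTAssertEqual(account.balance, 10)
    }

    func testWithdrawReducesBalance() {
        account.deposit(10)   // set up the needed state here; don't rely on another test having run first
        account.withdraw(4)
        XCTAssertEqual(account.balance, 6)
    }
}

Written this way, the tests pass in whatever order the runner picks, which is exactly what the runner assumes.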

Related

Using XCUI tests and XC tests together

I am trying to use XCUI and XC tests together. I found this twitter post saying it was possible. However, which section of the build settings do those new attributes go in?
I ask because I tried that method and put those settings in the user-defined section of the project target, and it would not let me run my tests because those settings were defined.
UI tests operate like this:
The app is launched.
The tests run in a separate process, external to the app, and tell the app what to do.
Unit tests operate like this:
The app is launched.
The test code is injected into the running app.
The tests are executed.
These are radically different. UI tests operate strictly from the outside. They have no access to the internals of the program. In the end, UI tests boil down to simulating user actions.
Unit tests, on the other hand, operate from the inside. They can reach anything that is non-private.
The only way for UI tests to perform something like a unit test would be to build the test functionality into the production code, accessible through gestures. There are better ways to unit test than that, namely using unit testing frameworks.
So… no. They shouldn't live together.
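To make the difference concrete, here is a sketch of the two styles; each class belongs in its own target, and PriceFormatter, the "Buy" button, and the "Thanks!" label are all hypothetical.

import XCTest

// Hypothetical production type, defined here so the unit-test half is self-contained.
struct PriceFormatter {
    func string(fromCents cents: Int) -> String {
        String(format: "$%d.%02d", cents / 100, cents % 100)
    }
}

// Unit test: runs inside the app process and can call any non-private code directly.
final class PriceFormatterTests: XCTestCase {
    func testFormatsCents() {
        XCTAssertEqual(PriceFormatter().string(fromCents: 1999), "$19.99")
    }
}

// UI test: runs in a separate process and can only do what a user could do.
final class CheckoutUITests: XCTestCase {
    func testBuyShowsConfirmation() {
        let app = XCUIApplication()
        app.launch()
        app.buttons["Buy"].tap()                          // simulated gesture
        XCTAssertTrue(app.staticTexts["Thanks!"].exists)  // can only assert on what is visible
    }
}

The UI test never touches PriceFormatter; it can only observe the formatted string on screen, which is precisely why the two kinds of tests are built and run separately.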

Can Coded UI tests and MTM be used to create a test suite that will automatically play all test cases?

When creating a test suite in Selenium IDE, it is possible to let all the test cases in the suite run continuously and see the results when they finish. I'm looking into creating test suites in Microsoft Test Manager and possibly automating them with Coded UI tests. My question is: is it possible to run the tests one after another with no manual interaction? From what I've seen so far, it seems you have to manually verify the test results in each step for MTM tests and manually verify the pass or fail status at the end of the test.
You can create a test case and tie an automated test (Selenium/CUIT) to it in Visual Studio. This flips a flag in the test case work item to "automated" and allows you to automatically execute those test cases on test agents.
https://msdn.microsoft.com/en-us/library/dd380741.aspx

Tests failing when run with xctool with logicTestBucketSize

I have a project with over 1000 unit tests and was thinking of speeding up the build by using xctool's parallelize option.
So I turned that on and set logicTestBucketSize to 50. The tests run, but some fail that do not fail when this option is off.
My question: are buckets run independently in their own sandboxes, or do they share global variables that the unit tests might set up? That might explain some cross-contamination between the tests.
Yes. When running tests in parallel, xctool will run each bucket of tests in a single process, and run multiple buckets simultaneously in different processes. Additionally, you can select whether bucketing will be done on a case or class basis with -bucketBy class. You should probably use class unless you have very large test classes with many test cases.
Your tests may fail now, though they didn't before, because:
A test case relies on global state set up by an earlier test case, even one from a different test class, as long as both were grouped into the same binary. Such a test can now fail because the tests may run in a different order, or because the test it depends on lands in a different bucket and doesn't run in the same process at all. (See the sketch below.)
A test alters global state and causes later tests to fail. This may not have been a problem before because that test used to run after the tests it affects had already run.
A good way of flushing out the first type of failure is to run with a bucket size of 1 (in either bucket-by-class or bucket-by-case mode, depending on which mode you'll be running later), e.g. -parallelize -bucketBy class -logicTestBucketSize 1.
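A minimal Swift sketch of that first failure mode, using a hypothetical Session global; whether ProfileTests passes depends entirely on whether LoginTests happened to run earlier in the same process.

import XCTest

// Hypothetical global state shared by every test that runs in this process.
enum Session {
    static var currentUser: String?
}

final class LoginTests: XCTestCase {
    func testLoginSetsCurrentUser() {
        Session.currentUser = "alice"                 // mutates global state...
        XCTAssertEqual(Session.currentUser, "alice")  // ...and never resets it
    }
}

final class ProfileTests: XCTestCase {
    // Passes only when LoginTests happens to run earlier in the SAME process.
    // Once bucketing puts the two classes in different processes, currentUser
    // is nil here and this test starts failing.
    func testProfileShowsUser() {
        XCTAssertEqual(Session.currentUser, "alice")  // hidden order dependency
    }
}

Running with a bucket size of 1 isolates every class (or case) in its own process, so hidden dependencies like this fail deterministically instead of depending on scheduling.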

Automated application testing with TFS

I think I'm missing a link somewhere in how Microsoft expects TFS and automated testing to work together. TFS allows us to create test cases that have test steps. These can be merged into a variety of test plans. I have this all set up and working as I would expect for manual testing.
I've now moved on to automating some of these tests. I have created a new Visual Studio project, which relates to my test plan. I have created a test class that relates to the test case, and I planned to create a test method for each test step within the test class, using an ordered test to ensure that the methods are executed in the same order as the test steps.
I'd hoped that I could then link this automation up to the test case so that it could be executed as part of the test plan.
This is where it all goes wrong. It is my understanding that the association panel only hooks a test case up to a particular test method, not to an individual test step.
Is my understanding correct?
Has MS missed a trick here and made things a little too complicated, or have I missed something? If I hook a whole test case up to a single method, I lose the granularity of what each step is doing.
If each test step were hooked up to its own test method, it would be possible for the assert in each method to register a pass or fail against the overall test case.
Any help or direction so that I can improve my understanding would be appreciated.
The link is not obvious. In Visual Studio Team Explorer, create and run a query to find the test case(s). Open the relevant test case and view the test automation section. On the right-hand side of the test automation line there should be an ellipsis; click it and link the automated test to the test case.
I view this as pushing an automated test from Visual Studio. Confusingly, you cannot pull an automated test into MTM.
You can link only one method to a test case. That one method should cover all the steps written in its associated test case, including verification (assertions).
If it becomes impossible to cover all the steps in one test method, or if you have too many verifications in your test case, then the test case needs to be broken down into smaller test cases, each of which gets one automated method associated with it.
An automated test should work like this (not a hard rule, though):
Start -> Do some action -> Verify (Assert) -> Finish
You can write as many assertions as you like, but if the first assert fails, the test won't proceed to the remaining assertions. This is how manual testing also works, i.e. the test fails even if only 1 step out of 100 fails.
For the sake of maintainability, it is advisable to keep the number of asserts in an automated test to a minimum, and the easiest way to achieve this is by splitting the test case. Microsoft and other test automation providers work this way; we don't write test methods for each and every step, as that would make things very complicated.
But yes, you can write reusable methods (not test methods) in your test framework for each step and call them from your test methods. For example, you don't have to write the code for a test case step such as "Application Login" again and again; you can write that method once and call it in the test method that is linked to the test case, as in the sketch below.
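A minimal MSTest sketch of that layout; the checkout scenario, SKU, and credentials are all made up, and the step helpers are stubs standing in for real UI automation code.

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CheckoutTests
{
    private int _basketCount;

    // Reusable step helpers (NOT test methods): "Application Login" is written
    // once and called from every test method that needs it.
    private void ApplicationLogin(string user, string password)
    {
        // drive the login screen here; app-specific and omitted
    }

    private void AddItemToBasket(string sku)
    {
        _basketCount++;  // stand-in for driving the real UI
    }

    // The one [TestMethod] that gets associated with the TFS test case.
    // Start -> Do some action -> Verify (Assert) -> Finish
    [TestMethod]
    public void AddSingleItemToBasket()
    {
        ApplicationLogin("tester", "secret");  // shared step
        AddItemToBasket("SKU-42");             // action
        Assert.AreEqual(1, _basketCount);      // one focused verification
    }
}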

Parameterized Functional Tests using TFS / Testing Center?

I'm trying to leverage the functionality of the TFS Test Case, which allows a user to add parameters to a test case. However, when I set up a plain vanilla Unit Test (which will become my functional / integration test), and use the Insert Parameter feature, I just don't seem to be able to access the parameter data. From the little I can find, it seems as if this parameterization is only for coded UI tests.
While it's possible for me to write a data driven unit test with the [DataSource] attribute on the test, this would mean a separate place to manage the data for the testing, potentially a new UI, etc. Not terrible but not optimal. What would be ideal is to manage everything through Testing Center but I cannot for the life of me find a description of how to get at that data inside the unit test.
Am I missing something obvious?
Either I didn't understand your question or maybe you answered it yourself :-). Let me explain:
Both Unit Tests and Coded UI Tests (in fact, most MSTest-based tests) leverage the same [DataSource] infrastructure. That way, tests can be parameterized without embedding the parameter data in the test itself.
VS 2005 and VS 2008 basically offered databases (text, XML or relational ones) as valid test data sources. VS 2010 (and Microsoft Test Manager) introduced a new kind of data source: a "Test Case Data Source", which is automatically inserted in a Coded UI test generated from a test case recording.
But nothing prevents you from doing the same to your own unit tests. I think the workflow below could work for you:
1. Create a test case in MTM;
2. Add your parameters and data rows;
3. Save your test case. Take note of the work item ID (you're gonna need it);
4. Create your unit test and add the following attribute to the method header:
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.TestCase", "http://my-tfs-server:8080/tfs/my-collection;My-Team-Project", "WI#", DataAccessMethod.Sequential), TestMethod]
5. In the attribute above, replace WI# with the work item ID from step 3;
6. (Optional) In Visual Studio, go to the Test menu and click Windows | Test View. Select the unit test you just created, right-click it and choose "Associate Test to Test Case". Point it at the same test case work item created in step 3, and you have now turned your manual test case into an automated test case. NOTE: When you automate a test you can no longer run it manually from MTM. You need Lab Management (and an environment configured as being able to run automated tests) in order to schedule and run an automated test case.
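Putting steps 4 and 5 together, here is a sketch of what the finished test method could look like; the work item ID 1234 and the UserName/Password parameter names are placeholders for whatever you created in steps 1-3.

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ParameterizedTests
{
    // MSTest injects the runtime context here, including the current data row.
    public TestContext TestContext { get; set; }

    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.TestCase",
                "http://my-tfs-server:8080/tfs/my-collection;My-Team-Project",
                "1234", DataAccessMethod.Sequential)]
    [TestMethod]
    public void LoginAcceptsValidCredentials()
    {
        // Each parameter defined in the test case shows up as a column in
        // DataRow, and the method runs once per data row.
        string user = TestContext.DataRow["UserName"].ToString();
        string password = TestContext.DataRow["Password"].ToString();

        Assert.IsFalse(string.IsNullOrEmpty(user));
        Assert.IsFalse(string.IsNullOrEmpty(password));
    }
}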
