Automated application testing with TFS

I think I'm missing a link somewhere in how Microsoft expects TFS and automated testing to work together. TFS allows us to create test cases that have test steps. These can be merged into a variety of test plans. I have this all set up and working as I would expect for manual testing.
I've now moved on to automating some of these tests. I have created a new Visual Studio project, which relates to my test plan. I have created a test class that relates to the test case and planned to create a test method for each test step within the test class, using an ordered test to ensure that the methods are executed in the same order as the test steps.
I'd hoped that I could then link this automation up to the test case so that it could be executed as part of the test plan.
This is where it all goes wrong: it is my understanding that the association panel only hooks a test case up to a particular test method, not to an individual test step.
Is my understanding correct?
Have Microsoft missed a trick here and made things a little too complicated, or have I missed something? If I hook a whole test case up to a single method I lose the granularity of what each step is doing.
If each test step were hooked to its own test method, it would be possible for the asserts in the test methods to register a pass or fail for the overall test case.
Any help or direction so that I can improve my understanding would be appreciated.

The link is not obvious. In Visual Studio Team Explorer, create and run a query to find the test case(s). Open the relevant test case and view the test automation section. On the right-hand side of the test automation line there should be an ellipsis; click it and select the automated test to link.
I view this as pushing an automated test from Visual Studio. Confusingly you cannot pull an automated test into MTM.

You can link only one method to a test case. That one method should cover all the steps written in its associated test case, including verification (assertions).
If it becomes impossible to cover all the steps in one test method, or if you have too many verifications in your test case, then the test case needs to be broken down into smaller test cases, and each of those test cases will have one automated method associated with it.
An automated test should work like this (not a hard rule, though):
Start -> Do some action -> Verify (Assert) -> Finish
You can write as many assertions as you like, but if the first assert fails then the test won't proceed to the remaining assertions. This is how manual testing works as well, i.e. the test fails even if 1 step out of 100 fails.
For the sake of automated-test maintainability it is advisable to keep the number of asserts in an automated test to a minimum, and the easiest way to achieve this is by splitting the test case. Microsoft and other test automation providers work this way, and we don't write test methods for each and every step; that would make things very complicated.
But yes, you can write reusable methods (not test methods) in your test framework for each step and call them from your test methods. For example, you don't have to write code for a test case step such as "Application Login" again and again. You can write that method separately and call it in the test method which is linked to the test case.
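As a rough sketch of what that layout can look like (class, method and step names here are invented for illustration, not taken from the question), one MSTest method covers the whole test case and delegates a repeated step such as login to an ordinary, non-test helper method:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class OrderTests
    {
        // Reusable step: not a [TestMethod], so it never shows up as a test of its own.
        private void LoginToApplication(string user, string password)
        {
            // ... drive the UI or API to log the user in ...
        }

        // One test method per test case; this is the method you associate with
        // the TFS test case work item.
        [TestMethod]
        public void PlaceOrder_WithValidItem_ShowsConfirmation()
        {
            // Start
            LoginToApplication("testuser", "secret");

            // Do some action (placeholder for the real steps/result)
            bool confirmationShown = true;

            // Verify (Assert) - a failing assert fails the whole test case
            Assert.IsTrue(confirmationShown, "Order confirmation was not displayed.");

            // Finish (any cleanup goes here or in a [TestCleanup] method)
        }
    }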

Related

Is there a way to associate Specflow+Runner test cases with Azure Test plan?

We are looking for a solution for associating our automated test cases with an Azure test plan when using SpecFlow+Runner.
Tech stack:
Visual Studio 2017/19
.Net Framework 4.8
Specflow 3.1.97
SpecRun.Runner/SpecRun.Specflow 3.2.31
We have recently started using SpecFlow+ Runner; previously we were using SpecFlow with xUnit. When I try to associate a test case in Visual Studio, we run into the issue below. This used to work perfectly when we were using xUnit.
If you have come across this situation and have a solution or workaround, please do share it with us. Basically, we are looking for something that can tag our automated test cases in the test plan.
Below is the link that provides information about associating test cases, which worked fine for us while we were using xUnit.
https://learn.microsoft.com/en-us/azure/devops/test/associate-automated-test-with-test-case?view=azure-devops
Please do let me know in case you need any further information on this from my side. Any help will be greatly appreciated.
Sadly, the association with test cases is not possible out of the box when using the SpecFlow+ Runner. There is no way to extend this behavior that we are aware of.
But you could use SpecSync. It synchronizes your Scenarios into Test Cases and also provides the test execution results.
You can use the Azure DevOps test APIs and call them from hooks in the correct order, so that test runs can be created and test results updated during the run. You just need to understand how the Azure test APIs work and add the test case IDs to your scenario names.
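As a very rough sketch of that approach (the organisation, project, run ID and the "[tc:1234]" title convention are all assumptions for the example, and the request payload is simplified - check the Azure DevOps Test Results REST API documentation for the exact fields and api-version), a SpecFlow hook can publish the outcome of each scenario:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Text.RegularExpressions;
    using TechTalk.SpecFlow;

    [Binding]
    public class AzureTestResultHooks
    {
        // Assumed values - replace with your own organisation, project and PAT.
        private const string BaseUrl = "https://dev.azure.com/my-org/my-project";
        private const string Pat = "<personal-access-token>";
        private const int TestRunId = 42; // a run created earlier, e.g. in [BeforeTestRun]

        [AfterScenario]
        public void PublishResult(ScenarioContext scenarioContext)
        {
            // Pull the test case ID out of the scenario title, e.g. "Login succeeds [tc:1234]".
            var match = Regex.Match(scenarioContext.ScenarioInfo.Title, @"\[tc:(\d+)\]");
            if (!match.Success) return;

            string outcome = scenarioContext.TestError == null ? "Passed" : "Failed";
            string body = "[{ \"testCase\": { \"id\": " + match.Groups[1].Value + " }, "
                        + "\"testCaseTitle\": \"" + scenarioContext.ScenarioInfo.Title + "\", "
                        + "\"outcome\": \"" + outcome + "\", \"state\": \"Completed\" }]";

            using (var client = new HttpClient())
            {
                client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
                    "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes(":" + Pat)));

                // POST the result to an existing test run (fields simplified on purpose).
                client.PostAsync(
                    BaseUrl + "/_apis/test/runs/" + TestRunId + "/results?api-version=6.0",
                    new StringContent(body, Encoding.UTF8, "application/json"))
                    .GetAwaiter().GetResult();
            }
        }
    }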

Can Coded UI tests and MTM be used to create a test suite that will automatically play all test cases?

When creating a test suite in Selenium IDE, it is possible to let all the test cases in the suite run continuously and see the results when finished. I'm looking into creating test suites in Microsoft Test Manager and possibly automating them with Coded UI tests. My question is: is it possible to run the tests one after another with no manual interaction? From what I've seen so far, it seems you have to manually verify the test results in each step for MTM tests and manually verify the pass or fail status at the end of the test.
You can create a test case and tie an automated test (Selenium/CUIT) to it in Visual Studio. This flips a flag in the test case work item to "automated" and allows you to automatically execute those test cases on test agents.
https://msdn.microsoft.com/en-us/library/dd380741.aspx
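To make that concrete, the automated test you tie to the test case can be a plain MSTest method; the sketch below uses Selenium WebDriver with a placeholder URL and assertion (none of these names come from the question). Once associated, the asserts decide pass/fail, so no manual verification is needed when the suite runs on a test agent:

    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    [TestClass]
    public class SmokeTests
    {
        [TestMethod]
        public void HomePage_Loads_AndShowsExpectedTitle()
        {
            IWebDriver driver = new ChromeDriver();
            try
            {
                driver.Navigate().GoToUrl("https://example.com/");

                // The assert records the result; no human needs to mark pass/fail.
                Assert.IsTrue(driver.Title.Contains("Example"),
                    "Unexpected page title: " + driver.Title);
            }
            finally
            {
                driver.Quit();
            }
        }
    }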

Executable requirements, PHPUnit and Jenkins strategy

We use Jenkins and PHPUnit in our development. For a long time I have wanted to start using executable requirements in our team. An architect/team leader/someone who defines requirements can write tests before the actual code. The actual code should then be written by another team member. Therefore the executable-requirements tests are committed to the repository before the actual code is made, Jenkins rebuilds the project and rightfully fails, and the project remains in a failed state until the new code is written, which defeats the XP rule of keeping the project in a good state at all times.
Is there any way to tell PHPUnit that such and such tests should not be run under Jenkins, while they can still be executed locally by any dev with ease? Tweaking phpunit.xml is not really desirable: local changes to the tests themselves are better, as they are easier to keep track of.
We tried markTestIncomplete() and markTestSkipped(), but they do not really do what we need: executable-requirements tests really are complete and should not be skipped. Using these functions prevents easy execution of such tests during development.
The best approach in our case would be to have a PHPUnit option like --do-not-run-requirements, which would be used by the PHPUnit run executed by Jenkins. On a dev machine this option would not be used, and the actual executable-requirements tests would have an #executableRequirements meta-comment at the beginning (removed only after the actual code is created and tested). The issue is that PHPUnit does not have such functionality.
Maybe there is a better way to achieve executable requirements without "false" failures in Jenkins?
With PHPUnit, tests can be filtered for execution. Either annotate the tests that should not be executed in one environment with the @group annotation and then use --exclude-group <name-of-group> (or the <group> element of PHPUnit's XML configuration file), or use the --filter <pattern> command-line option. Both approaches are covered in the documentation.
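A small sketch of the group-based approach (the group, class and method names are made up, and Invoice/Line stand for the not-yet-written code the requirement describes): tag the executable-requirements tests with a group and exclude that group only in the Jenkins invocation:

    <?php
    use PHPUnit\Framework\TestCase;

    class InvoiceRequirementsTest extends TestCase
    {
        /**
         * @group executable-requirements
         */
        public function testInvoiceTotalIncludesVat(): void
        {
            // Requirement written up front; the implementation will make it pass.
            $invoice = new Invoice([new Line(100.00)]);
            $this->assertEquals(121.00, $invoice->totalWithVat(0.21));
        }
    }

    // Jenkins runs:     phpunit --exclude-group executable-requirements
    // Developers run:   phpunit        (all groups, requirements included)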
"For long time I wanted to start to use Test Driven Development in our team."
I don't see any problem with writing tests before actual code. This is not TDD.
To quote from Wikipedia: "first the developer writes an (initially failing) automated test case that defines a desired improvement or new function, then produces the minimum amount of code to pass that test, ..."
Notice the test case in the singular.
Having said that, you are quite welcome to define your own development methodology whereby one developer writes tests (in the plural), commits them to version control, and another developer writes code to satisfy the tests.
The solution to your dilemma is to commit the tests to a branch and have the other developer work in that branch. Once all the tests are passing, merge with trunk and Jenkins will see the whole lot and give its opinion on whether the tests pass or not.
Just don't call it TDD.
I imagine it would not be very straightforward in practice to write tests without any basic framework in place. Hence the 'minimum amount of code to pass the test' approach, as you suggested, is not a bad idea.
Not necessarily a TDD approach
- Who writes the tests? If someone who works with requirements or a QA member writes the tests, you could probably simply write empty tests (so they don't fail). This approach will make sure that the developer covers all the cases that the other person has thought about. Example test methods would be public void testThatObjectUnderTestReturnsXWhenACondition and public void testThatObjectUnderTestReturnsZWhenBCondition. (I like long descriptive names so there is no confusion as to what I am thinking; or you can use comments to describe your tests.) The DEVs can write the code and finish the tests, or let someone else finish the tests later. Another way of stating this is to write executable requirements. See Cucumber/Steak/JBehave as executable-requirements tools.
Having said the above, we need to differentiate whether you are trying to write executable requirements or unit/integration/acceptance tests.
If you want to write executable requirements, anyone can write them, and they can be empty to stop them from failing. The DEVs will then fill them in and make sure the requirements are covered. My opinion is to let the DEVs deal with unit/integration/acceptance tests using TDD (actual TDD) and not separate the responsibility of writing code from writing the appropriate unit/integration/acceptance tests for that code.
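For what it's worth, a tiny sketch of such empty, descriptively named placeholder tests in PHPUnit (class and method names are invented); depending on configuration PHPUnit may flag assertion-less tests as risky rather than failing them, which suits this purpose:

    <?php
    use PHPUnit\Framework\TestCase;

    class DiscountRequirementsTest extends TestCase
    {
        public function testThatDiscountIsAppliedWhenOrderExceedsThreshold(): void
        {
            // TODO (dev): implement once the discount logic exists.
        }

        public function testThatNoDiscountIsAppliedForSmallOrders(): void
        {
            // TODO (dev): implement once the discount logic exists.
        }
    }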

Can not run single KIFTestCase (subclass of XCTestCase)

I am new to automated testing.
I am trying to do automated integration testing of my app with the KIF framework to facilitate testing before releases. I have several test cases. When I run the tests (Cmd + U) these test cases run, but in a strange sequence (not in alphabetically sorted order). I also cannot run a single test case; when I try to do so, a random test case runs before the test case I want to run.
P.S. Some of my test cases inherit from more general test cases.
Can you give me any hints as to what the cause could be?
Thanks!
AFAIK, test cases have no defined order and they should be independent of one another. If you have unit tests that depend on execution order, you're doing testing incorrectly and need to refactor your tests to be independent.

Parameterized Functional Tests using TFS / Testing Center?

I'm trying to leverage the functionality of the TFS test case, which allows a user to add parameters to a test case. However, when I set up a plain vanilla unit test (which will become my functional/integration test) and use the Insert Parameter feature, I just don't seem to be able to access the parameter data. From the little I can find, it seems as if this parameterization is only for Coded UI tests.
While it's possible for me to write a data-driven unit test with the [DataSource] attribute on the test, this would mean a separate place to manage the test data, potentially a new UI, etc. Not terrible, but not optimal. What would be ideal is to manage everything through Testing Center, but I cannot for the life of me find a description of how to get at that data inside the unit test.
Am I missing something obvious?
Either I didn't understand your question or maybe you answered it yourself :-). Let me explain:
Both unit tests and Coded UI tests (in fact, most MSTest-based tests) leverage the same [DataSource] infrastructure. That way, tests can be parameterized without the need to embed the parameter data in the test itself.
VS 2005 and VS 2008 basically offered databases (text, XML or relational ones) as valid test data sources. VS 2010 (and Microsoft Test Manager) introduced a new kind of data source: a "Test Case Data Source", which is automatically inserted in a Coded UI test generated from a test case recording.
But nothing prevents you from doing the same to your own unit tests. I think the workflow below could work for you:
1. Create a test case in MTM;
2. Add your parameters and data rows;
3. Save your test case. Take note of the work item ID (you're gonna need it);
4. Create your unit test and add the following attribute to the method header:
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.TestCase", "http://my-tfs-server:8080/tfs/my-collection;My-Team-Project", "WI#", DataAccessMethod.Sequential), TestMethod]
5. In the attribute above, replace WI# with the work item ID from step 3;
6. (Optional) In Visual Studio, go to the Test menu and click Windows | Test View. Select the unit test you just created, right-click it and choose "Associate Test to Test Case". Point it to the same test case work item created in step 3, and you have now turned your manual test case into an automated test case. NOTE: When you automate a test you can no longer run it manually from MTM. You need Lab Management (and an environment configured as being able to run automated tests) in order to schedule and run an automated test case.
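To show how the parameter data then becomes reachable inside the unit test (the parameter names "UserName" and "Password" and the work item ID 1234 are placeholders for whatever you defined in steps 1-3), the values arrive through the test's TestContext data row on the desktop MSTest framework:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class LoginFunctionalTests
    {
        // MSTest sets this property; DataRow is populated for data-driven tests.
        public TestContext TestContext { get; set; }

        [DataSource("Microsoft.VisualStudio.TestTools.DataSource.TestCase",
            "http://my-tfs-server:8080/tfs/my-collection;My-Team-Project",
            "1234",                      // test case work item ID from step 3
            DataAccessMethod.Sequential)]
        [TestMethod]
        public void Login_WithTestCaseParameters_Succeeds()
        {
            // Each iteration corresponds to one data row of the test case.
            string userName = TestContext.DataRow["UserName"].ToString();
            string password = TestContext.DataRow["Password"].ToString();

            // ... exercise the system under test with these values ...
            Assert.IsFalse(string.IsNullOrEmpty(userName));
        }
    }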
