Parameterized Functional Tests using TFS / Testing Center?

I'm trying to leverage the functionality of the TFS Test Case, which allows a user to add parameters to a test case. However, when I set up a plain vanilla Unit Test (which will become my functional / integration test), and use the Insert Parameter feature, I just don't seem to be able to access the parameter data. From the little I can find, it seems as if this parameterization is only for coded UI tests.
While it's possible for me to write a data driven unit test with the [DataSource] attribute on the test, this would mean a separate place to manage the data for the testing, potentially a new UI, etc. Not terrible but not optimal. What would be ideal is to manage everything through Testing Center but I cannot for the life of me find a description of how to get at that data inside the unit test.
Am I missing something obvious?

Either I didn't understand your question or maybe you answered it yourself :-). Let me explain:
Both Unit Tests and Coded UI Tests (in fact, most MSTest-based tests) leverage the same [DataSource] infrastructure. That way, tests can be parameterized without the need to embed the parameter data in the test itself.
VS 2005 and VS 2008 basically offered text files, XML files and relational databases as valid test data sources. VS 2010 (and Microsoft Test Manager) introduced a new kind of data source: a "Test Case Data Source", which is automatically inserted in a Coded UI test generated from a test case recording.
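(For reference, a file-backed data source from that era looked roughly like this; the CSV file name is a placeholder, and DeploymentItem just copies the file to the test run directory.)
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\\params.csv", "params#csv", DataAccessMethod.Sequential), DeploymentItem("params.csv"), TestMethod]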
But nothing prevents you from doing the same to your own unit tests. I think the workflow below could work for you:
Create a test case in MTM;
Add your parameters and data rows;
Save your test case. Take note of the work item ID (you're gonna need it);
Create your unit test and add the following attribute to the method header:
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.TestCase", "http://my-tfs-server:8080/tfs/my-collection;My-Team-Project", "WI#", DataAccessMethod.Sequential), TestMethod]
In the attribute above, replace WI# with the work item id from #3;
(Optional) In Visual Studio, go to the Test menu and click Windows | Test View. Select the unit test you just created, right-click it and choose "Associate Test to Test Case". Point to the same test case work item created in #3, and you have now turned your manual test case into an automated test case. NOTE: When you automate a test you can no longer run it manually from MTM. You need Lab Management (and an environment configured as being able to run automated tests) in order to schedule and run an automated test case.
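To make the data access concrete, here is a minimal sketch of such a unit test (assuming MSTest; the parameter name "UserName" is a placeholder for whatever parameters you defined in MTM). The values come back through TestContext.DataRow, and the method body runs once per data row:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class MyFunctionalTests
{
    // MSTest injects the runtime context, including the current data row.
    public TestContext TestContext { get; set; }

    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.TestCase",
        "http://my-tfs-server:8080/tfs/my-collection;My-Team-Project",
        "WI#", DataAccessMethod.Sequential)]
    [TestMethod]
    public void MyParameterizedTest()
    {
        // Column names match the parameter names defined in the test case.
        string userName = TestContext.DataRow["UserName"].ToString();
        // ... exercise the system under test with this value ...
        Assert.IsFalse(string.IsNullOrEmpty(userName));
    }
}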

Related

TFS visual studio test "Test Filter criteria" and xunit Traits

I'm trying to filter out some unit tests marked with an xUnit trait category. On my build server I don't want unit tests with the Integration category to run:
[Trait("Category", "Integration")]
When I don't use the Test Filter criteria in the TFS Visual Studio Test step, the unit tests all get executed. But when I apply the desired filter
Category!=Integration
It doesn't run any tests at all. I've tried all variations but no success. I also tried using the 2.* version of the test task.
Instead of using [Trait("Category", "Integration")], use [Trait("TestCategory", "Integration")]. xUnit doesn't care what you put in these key-value pairs, but the TFS test runner task DOES: the Test Filter criteria box only works with a prescribed set of attributes. I ran into the same problem you are describing and, after analyzing the build output, have this working for our builds. Getting ALL test projects switched to xUnit is also key; don't have a mix of MSTest and xUnit.
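In other words (a sketch; the class and method names here are made up), the trait in the test code looks like this, paired with a Test Filter criteria of TestCategory!=Integration in the build step:

using Xunit;

public class OrderServiceTests
{
    // "TestCategory" is the trait key the TFS test task recognizes for filtering.
    [Trait("TestCategory", "Integration")]
    [Fact]
    public void PlacesOrderAgainstRealDatabase()
    {
        // ... integration-only test body ...
    }
}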
Now (February 2022), this is no longer a problem. However, I had some trouble getting it working and ended up here anyway, so just a couple of points:
The build configuration must actually build the test project (e.g. in Release the test project is often not built)
Multiple detected test DLLs seem to cause trouble (better to filter down to exactly the ones you want)
The name of the trait (and its value) can be arbitrary (in 16.8 and 17.0.0 at least), i.e. the text in Test Filter criteria no longer needs to be one of the vstest standard names, like TestCategory

How do I customize the Test Result Work Item Template Definition in TFS 2015?

My question is simple: I have used witadmin in the past to add custom fields to various work item types in TFS, most recently to add time estimates to the Test Case work item type. However, when I use the witadmin listwitd command on my project, I don't see anything for Test Result.
Is there any way to customize the work item template for test results in TFS? I want to add additional test result outcomes beyond the ones that are already out of the box, i.e. pass, fail, blocked, not applicable.
There is no work item type called Test Result in TFS. You can customize the Test Plan, Test Suite and Test Case types, adding custom fields or defining custom workflows to them, but not Test Result.
The test result is associated with MTM. If you want to customize the test result Failure Type and Resolution Type, please refer to this link on MSDN: Customize and manage the test experience [tcm and Microsoft Test Manager]
For more info, please take a look at this UserVoice suggestion: Provide customization for test plan, test results.

Run specific test with every integration in TFS

How to make specific test cases run when changes are made to a specific component in TFS?
For example, I have 10 test cases for component A and 10 test cases for component B. When a developer merges code related to component A, I want only the test cases related to component A to run.
What you need is Test Impact Analysis. As described in this MSDN article,
Test Impact Analysis (TIA) helps in analysis of impact of development on existing tests. Using TIA, developers know exactly which tests need to be verified as a result of their code change.
To enable test impact analysis in a build process, you need to:
1. Configure test impact analysis in a test settings file. Check the "Enabling Test Impact Collection" part of this article.
2. Specify that test settings file in the build definition.
3. Set Analyze Test Impact to true. Check the "Enable Test Impact Analysis" part of this article for the details.
Additionally, it is possible for you customize your build process template to only run impacted tests: http://blog.robmaherconsulting.co.nz/2011/03/tfs-2010-build-only-run-impacted-tests.html

Automated application testing with TFS

I think I'm missing a link somewhere in how Microsoft expects TFS and automated testing to work together. TFS allows us to create test cases that have test steps. These can be merged into a variety of test plans. I have this all set up and working as I would expect for manual testing.
I've now moved into automating some of these tests. I have created a new Visual Studio project, which relates to my test plan. I have created a test class that relates to the test case and planned to create a test method for each test step within the test class, using an ordered test to ensure that the methods are executed in the same order as the test steps.
I'd hoped that I could then link this automation up to the test case so that it could be executed as part of the test plan.
This is where it all goes wrong. It is my understanding that the association panel only hooks a test case up to a particular test method, not a test step?
Is my understanding correct?
Have MS missed a trick here and made things a little too complicated, or have I missed something? If I hook a whole test case up to a single method I lose granularity of what each step is doing.
If each test step were hooked to its own test method, it would be possible for the assert of each test method to register a pass or fail towards the overall test case.
Any help or direction so that I can improve my understanding would be appreciated.
The link is not obvious. In Visual Studio Team Explorer, create and run a query to find the test case(s). Open the relevant test case and view the test automation section. On the right-hand side of the test automation line there should be an ellipsis; click it and link your test method to the test case.
I view this as pushing an automated test from Visual Studio. Confusingly you cannot pull an automated test into MTM.
You can link only one method to a test case. That one method should cover all the steps written in its associated test case, including verification (assertions).
If it is impossible to cover all the steps in one test method, or if you have too many verifications in your test case, then the test case needs to be broken down into smaller test cases, each with one automated method associated with it.
An automated test should work like this (not a hard rule though):
Start -> Do some action -> Verify (Assert) -> Finish
You can write as many assertions as you like, but if the first assert fails the test won't proceed to the remaining assertions. This is how manual testing also works, i.e. the test fails even if 1 step out of 100 fails.
For the sake of maintainability it is advisable to keep the asserts in an automated test to a minimum, and the easiest way to achieve this is by splitting the test case. Microsoft and other test automation providers work this way; we don't write test methods for each and every step, as that would make things very complicated.
But yes, you can write reusable methods (not test methods) in your test framework for each step and call them from your test methods, as sketched below. For example, you don't have to write the code for a test case step like "Application Login" again and again. You can write that method separately and call it in the test method which is linked to the test case.
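A minimal sketch of that pattern, assuming MSTest (the class, helper and step names here are made up): one test method is associated with the test case, and each manual step becomes a call to a reusable helper rather than its own test method.

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class PlaceOrderTests
{
    // Reusable step methods (not test methods), shared across test cases.
    private void ApplicationLogin(string user, string password) { /* ... */ }
    private void PlaceOrder(string sku) { /* ... */ }
    private bool OrderWasAccepted() { /* ... */ return true; }

    // The single test method associated with the "Place an order" test case.
    [TestMethod]
    public void PlaceOrder_SucceedsForValidUser()
    {
        ApplicationLogin("test-user", "secret"); // test case step 1
        PlaceOrder("SKU-42");                    // test case step 2
        // One verification covering the test case's expected result.
        Assert.IsTrue(OrderWasAccepted());
    }
}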

Run test plan against 3rd party versioned programs

Using Visual Studio Online I created a test plan for a program, written by a different company, that my company uses. We have a specific set of tests that need to run before we accept a new version of this program. So when I edit the test plan I would like to be able to manually select a build by typing in, say, version "1.0.1.195". Then when a newer version comes out I can just type in the newer version and retest using those same tests. However, when I go to select a build, TFS filters against the builds of my own code. Is it possible to do what I'm asking using TFS?
EDIT
To answer a few of the questions in the comments I'll be a bit more descriptive about what I am doing. A 3rd party company made a program we use to test some hardware. Every now and then there is an update to that software. Since a few of us use this program to test the hardware, we need to know that the software can be installed with little to no downtime while upgrading. So we came up with a small set of tests that we run the program through to make sure that we can test reliably. Those tests were written in a Word document, so I put them into MTM. Although I make some software that is related to this, their software depends on mine. I've not had to update my code for some time now. My overall intention is to use MTM to document my testing of this program.
Do you want to store the version of the 3rd party component along with the test result of the test run it was tested with on TFS?
That would be nice. My ultimate end game is to put the results of said tests back into that Word document and make it available to those who don't have MTM installed (which is everyone). This way, when a new version of the software comes out, I can just go into MTM, reset all my tests back to active, update the version number, and retest.
The build you set up in Microsoft Test Manager (MTM) defines the drop location containing your tests, not the application under test (it can be different if you build your tests using another build).
That's why you can only select one of your own builds for your code.
What you are talking about is deployment.
That means you have to make sure the right version of the 3rd party program is deployed to the environment the tests are running on.
EDIT
What you need is a Test Configuration
Here you can find a good explanation of how to create one: Test Configurations - specifying test platforms
The idea in your use case would be as follows
(below I'm using terms described in the article mentioned above):
Create a Configuration Variable where you will store the current version of the 3rd party program
Create a Test Configuration and add this variable to it.
Set this Test Configuration as default test configuration for your test plan.
Be aware that if your test plan already contains test cases you will have to add this Test Configuration to each Test Case manually, since only newly added Test Cases get it assigned automatically
If you get a new version of the 3rd party program you will:
Add the new version of the program to the values allowed for the Configuration Variable
Open the Test Configuration you are using and update the program's version to the new one.
Reset your tests and run them.
Doing so you:
store all versions you have tested so far in the Configuration Variable, since you add the new one instead of overwriting the old one, so you get a kind of history.
store the last version you have tested in the Test Configuration.
That should meet your needs.
Additional suggestion
(has nothing to do with your question but with your use case)
Consider describing tests within your Test Cases instead of creating a Word document.
Here is a good place to start reading: How to: Create a Manual Test Case
The benefits would be:
You can run your manual tests using Test Runner provided by MTM
Doing so, you will have all the steps you performed stored with the Test Result; you can add comments to each step as you execute it, etc.
You can still export the test description to a Word document using this MTM add-on: Test Scribe.
Using this add-on you can also create a report of your test execution.
Additionally, if you are going to use MTM more in your daily job, I would recommend this free e-book: Testing for Continuous Delivery with Visual Studio 2012
