Run test plan against 3rd party versioned programs - tfs

Using Visual Studio Online I created a test plan for a program my company uses that was written by a different company. We have a specific set of tests that must pass before we accept a new version of this program. So when I edit the test plan, I would like to be able to manually select a build by typing in, say, version "1.0.1.195". Then when a newer version comes out I can just type in the new version and retest using those same tests. However, when I go to select a build, TFS filters the list down to builds of my own code. Is it possible to do what I'm asking using TFS?
EDIT
To answer a few of the questions in the comments, I'll describe what I am doing in a bit more detail. A 3rd party company made a program we use to test some hardware. Every now and then there is an update to that software. Since a few of us use this program to test the hardware, we need to know that the software can be installed with little to no downtime while upgrading. So we came up with a small set of tests that we run the program through to make sure that we can test reliably. Those tests were written in a Word document, so I put them into MTM. Although I write some related software, their software depends on mine, and I've not had to update my code for some time now. My overall intention is to use MTM to document my testing of this program.
Do you want to store the version of the 3rd party component on TFS, along with the results of the test run it was tested with?
That would be nice. My ultimate end game is to put the results of said tests back into that Word document and make it available to those who don't have MTM installed (which is everyone). This way, when a new version of the software comes out, I can just go into MTM, reset all my tests back to active, update the version number, and retest.

The build you set up in Microsoft Test Manager (MTM) defines the drop location containing your tests, not the application under test (they can differ if your tests are produced by another build).
That's why you can only select one of the builds of your own code.
What you are talking about is deployment.
That means you have to make sure the right version of the 3rd party program is deployed to the environment the tests are running on.
EDIT
What you need is a Test Configuration.
Here you can find a good explanation of how to create one: Test Configurations - specifying test platforms
The idea in your use case would be as follows (below I'm using the terms described in the article mentioned above):
Create a Configuration Variable where you will store the current version of the 3rd party program
Create a Test Configuration and add this variable to it.
Set this Test Configuration as default test configuration for your test plan.
Be aware that if your test plan already contains test cases, you will have to add this Test Configuration to each Test Case manually, since only newly added Test Cases get it assigned automatically.
If you get a new version of the 3rd party program you will:
Add the new version of the program to the values allowed for the Configuration Variable
Open the Test Configuration you are using and update the program's version to the new one.
Reset your tests and run them.
Doing so you:
store all the versions you have tested so far in the Configuration Variable, since you add the new value instead of overwriting the old one, so you get a kind of history.
store the last version you have tested in the Test Configuration.
That should meet your needs.
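As a side note, should you ever automate these test cases, the test code can find out which Test Configuration it was run under: when a run is started from MTM/tcm, run metadata is injected into TestContext.Properties. A minimal sketch, assuming the "__Tfs_TestConfigurationName__" property key that MTM is commonly reported to use for automated runs (verify it against your TFS version):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class UpgradeTests
    {
        public TestContext TestContext { get; set; }

        [TestMethod]
        public void InstallSucceedsWithMinimalDowntime()
        {
            // Assumption: a run started from MTM injects the Test Configuration
            // name under this key; it is null when the test runs locally.
            object configName = TestContext.Properties["__Tfs_TestConfigurationName__"];
            TestContext.WriteLine("Running under configuration: {0}", configName ?? "(local run)");

            // ... the actual upgrade/installation checks would go here ...
        }
    }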
Additional suggestion
(has nothing to do with your question but with your use case)
Consider describing tests within your Test Cases instead of creating a Word document.
Here is a good place to start reading: How to: Create a Manual Test Case
The benefits would be:
You can run your manual tests using Test Runner provided by MTM
Doing so, all the steps you performed are stored with the Test Result, you can add comments to each step while executing it, etc.
You can still export the test description to a Word document using this MTM add-on: Test Scribe.
Using this add-on you can also create a report of your test execution.
Additionally, if you are going to use MTM more in your daily job, I would recommend this free e-book: Testing for Continuous Delivery with Visual Studio 2012

Related

Which tests should be run since a previous TFS build?

My managers want us to determine which tests might have to be run, based on coding changes that were made to the application we are testing.
But it is hard to know which tests actually need to be re-verified as a result of a code change. What we commonly do is test the entire area where the code change occurred, or the entire project/solution.
We were told this could be achieved by TFS build or MTM tools. Could someone share the details?
PS: We are running on TFS 2015 Update 4, VS 2017.
There is a concept called Test Impact Analysis (TIA) which helps analyze the impact of development changes on existing tests. Using TIA, developers know exactly which tests need to be verified as a result of their code change.
The Test Impact Analysis (TIA) feature specifically enables this – TIA is all about incremental validation by automatic test selection. For a given code commit entering the pipeline TIA will select and run only the relevant tests required to validate that commit. Thus, that test run is going to complete faster, if there is a failure you will get to know about it faster, and because it is all scoped by relevance, analysis will be faster as well.
Test Impact Analysis for managed automated tests is available via a checkbox in the 2.* preview version of the VSTest task.
If enabled, only the relevant set of managed automated tests that need to be run to validate a given code change will run. Test Impact Analysis requires the latest version of Visual Studio, and is presently supported in CI for managed automated tests.
However, this is only available with TFS 2017 Update 1 (it requires the 2.* preview version of the VSTest task). For more details, please refer to this blog: Accelerated Continuous Testing with Test Impact Analysis

How to adjust test configuration in TFS 2015 Visual Studio Test task

I'm trying to setup our TFS 2015 server to run automated tests. I've got it running, but we need to run our tests in Debug mode (for various reasons I can't really adjust). The problem is that I can't seem to figure out a way to switch the configuration in the Test task.
The help that the task links to (here) says that it's as easy as selecting Platform and Configuration, but the problem is that those options don't exist for me (they exist under Reporting, but the help there suggests that they will simply compare the results to other builds with that configuration).
I've also investigated the vstest.console.exe parameters (help I found was this one) as well as modifying the runsettings file, but these only allow me to modify the platform.
Overall, my question is: a) is there a reason why I don't see the Platform/Configuration options in TFS, and b) given that I don't see them, how can I modify the configuration that the tests are running under?
If it helps, TFS reports its version as 14.95.25122.0, which corresponds to Update 2. I checked the release notes for 2.1 and 3, but wasn't able to find anything that suggested this was added in later versions (though I could be wrong).
UPDATE:
I've realized that I misread the Test documentation and that the Platform/Configuration options were always for reporting only.
My question is then if I can actually set this in the tests somehow.
Thank you very much for any help.
Assuming you want to compile your test project in Debug mode, you can add a Visual Studio Build step and define the variable BuildConfiguration as debug.
Then, in the Visual Studio Test step, specify the Test Assembly as **\$(BuildConfiguration)\*test*.dll so that the assemblies under the Debug folder are tested.
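Note that the build configuration is baked into the test assemblies at compile time. If you want a quick sanity check that the Debug assemblies are actually the ones being executed, a test along these lines could work (a minimal sketch; class and method names are illustrative):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class BuildConfigurationTests
    {
        [TestMethod]
        public void TestAssemblyWasCompiledInDebug()
        {
            // The DEBUG symbol is defined by the Debug configuration at
            // compile time, so this reflects the configuration used by the
            // Visual Studio Build step that produced this assembly.
    #if DEBUG
            bool isDebugBuild = true;
    #else
            bool isDebugBuild = false;
    #endif
            Assert.IsTrue(isDebugBuild, "Expected the test assembly to be built in Debug.");
        }
    }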

Run specific test with every integration in TFS

How to make specific test cases run when changes are made to a specific component in TFS?
For example, I have 10 test cases for component A and 10 test cases for component B. When a developer merges code related to component A, I want only the test cases related to component A to run.
What you need is Test Impact Analysis. As described in this MSDN article:
Test Impact Analysis (TIA) helps in analysis of impact of development on existing tests. Using TIA, developers know exactly which tests need to be verified as a result of their code change.
To enable test impact analysis in a build process, you need to:
1). Configure test impact analysis in a test settings file. Check the "Enabling Test Impact Collection" part of this article.
2). Specify the test settings file to use in the build definition.
3). Set Analyze Test Impact to true. Check the "Enable Test Impact Analysis" part of this article for the details.
Additionally, it is possible for you to customize your build process template to only run impacted tests: http://blog.robmaherconsulting.co.nz/2011/03/tfs-2010-build-only-run-impacted-tests.html

How to run MTM tests on multiple product builds?

We have MTM tests running on Release build of our product (Desktop Application).
Now we want the same tests to run on two product builds: Beta and Release.
When a test run is initiated from MTM (or tcm), we need a way to pass a 'value' to the test run telling it which version/build of the product it needs to test. This 'value' will then be read in the test method and the correct decisions will be made while the tests are executing (installation path, test results file updates, etc.).
Is there any way to achieve this? in TFS or MTM?
Consider using Test Settings.
If you start automated tests from MTM you can specify the Test Settings to use when running those tests.
In the "Advanced" part of the Test Settings you can specify scripts to run on your environment before running the tests.
Create two scripts, one for the Release and one for the Beta version. These scripts could create a file with particular content, set an environment variable, or do something else that can then be checked by your test while it's running.
Create two Test Settings, one for the Release and one for the Beta version, and set up the appropriate script to run for each.
Use one of these Test Settings when starting the tests.
This way you can pass information to your tests.
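For the environment-variable variant, the test side could look like the following minimal sketch. The variable name PRODUCT_BUILD_TYPE and the paths are assumptions; your setup scripts would set the variable to Beta or Release:

    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class ProductInstallTests
    {
        public TestContext TestContext { get; set; }

        [TestMethod]
        public void InstallPathMatchesBuildType()
        {
            // PRODUCT_BUILD_TYPE is a hypothetical variable that the Test
            // Settings setup script would set to "Beta" or "Release".
            string buildType = Environment.GetEnvironmentVariable("PRODUCT_BUILD_TYPE") ?? "Release";

            // Branch on the value, e.g. to pick the installation path.
            string installPath = buildType == "Beta"
                ? @"C:\Program Files\MyProduct (Beta)"
                : @"C:\Program Files\MyProduct";

            TestContext.WriteLine("Testing the {0} build at {1}", buildType, installPath);
            // ... assertions against the installed product would go here ...
        }
    }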
We also faced a similar problem in our project. We decided to modify the build definition template to take the product build type (Beta, RTM, or Release) as an input parameter. Using this value during the TFS build, we can either update the TFS build name to reflect the product build type or create a file (XML) as part of the TFS build process to contain this detail.
See here for more detail on how to add Arguments and Parameters to build definition: http://www.ewaldhofman.nl/post/2010/04/27/Customize-Team-Build-2010-e28093-Part-2-Add-arguments-and-variables.aspx
Please take a look at the link below to see if it suits your needs.
http://blogs.infosupport.com/switching-browser-in-codedui-or-selenium-tests-based-on-mtm-configuration/
One question: are you using the Build-Deploy-Test flow to install the product on the environment, or doing it some other way?
When you select a set of automated tests to run and pick the build from the drop-down list, this tells MTM which drop folder to look in. So if your configuration is code, as it should be, you can set this up to be automatic.
It is not possible to pass additional variables when you start a test run in MTM.
You could set up your tests to run from the Release Management tool instead. You would then be able to configure the environment however you like based on passed-in variables.
http://nakedalm.com/execute-tests-release-management-visual-studio-2013/

Parameterized Functional Tests using TFS / Testing Center?

I'm trying to leverage the functionality of the TFS Test Case, which allows a user to add parameters to a test case. However, when I set up a plain vanilla Unit Test (which will become my functional / integration test), and use the Insert Parameter feature, I just don't seem to be able to access the parameter data. From the little I can find, it seems as if this parameterization is only for coded UI tests.
While it's possible for me to write a data driven unit test with the [DataSource] attribute on the test, this would mean a separate place to manage the data for the testing, potentially a new UI, etc. Not terrible but not optimal. What would be ideal is to manage everything through Testing Center but I cannot for the life of me find a description of how to get at that data inside the unit test.
Am I missing something obvious?
Either I didn't understand your question or maybe you answered it yourself :-). Let me explain:
Both Unit Tests and Coded UI Tests (in fact, most MSTest-based tests) leverage the same [DataSource] infrastructure. That way, tests can be parameterized without the need of embedding the parameter data in the test itself.
VS 2005 and VS 2008 basically offered databases (text, XML or relational ones) as valid test data sources. VS 2010 (and Microsoft Test Manager) introduced a new kind of data source: a "Test Case Data Source", which is automatically inserted in a Coded UI test generated from a test case recording.
But nothing prevents you from doing the same to your own unit tests. I think the workflow below could work for you:
Create a test case in MTM;
Add your parameters and data rows;
Save your test case. Take note of the work item ID (you're gonna need it);
Create your unit test and add the following attribute to the method header:
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.TestCase", "http://my-tfs-server:8080/tfs/my-collection;My-Team-Project", "WI#", DataAccessMethod.Sequential), TestMethod]
In the attribute above, replace WI# with the work item id from #3;
(Optional) In Visual Studio, go to the Test menu and click Windows | Test View. Select the unit test you just created, right-click it and choose "Associate Test to Test Case". Point to the same test case work item created in step 3, and now you have turned your manual test case into an automated test case. NOTE: when you automate a test you can no longer run it manually from MTM. You need Lab Management (and an environment configured to run automated tests) in order to schedule and run an automated test case.
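To make step 4 concrete, here is a minimal sketch of how such a unit test could read the test case parameters through TestContext.DataRow (the parameter name UserName and the work item id 123 are illustrative placeholders):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class ParameterizedFunctionalTests
    {
        public TestContext TestContext { get; set; }

        [DataSource("Microsoft.VisualStudio.TestTools.DataSource.TestCase",
            "http://my-tfs-server:8080/tfs/my-collection;My-Team-Project",
            "123", DataAccessMethod.Sequential)]
        [TestMethod]
        public void LoginWithTestCaseParameters()
        {
            // Each parameter column of the MTM test case appears as a column
            // in TestContext.DataRow, and the test runs once per data row.
            string userName = TestContext.DataRow["UserName"].ToString();
            TestContext.WriteLine("Running with UserName = {0}", userName);

            // ... exercise the system under test with this parameter ...
        }
    }

It can be worth running this once from Visual Studio to confirm the rows really come from the test case before associating the automation in step 6.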
