Run specific tests with every integration in TFS

How can I make specific test cases run when changes are made to a specific component in TFS?
For example: I have 10 test cases for component A and 10 test cases for component B. When a developer merges code related to component A, I want only the test cases related to component A to run.

What you need is Test Impact Analysis. As described in this MSDN article, Test Impact Analysis (TIA) helps analyze the impact of development on existing tests. Using TIA, developers know exactly which tests need to be verified as a result of their code change.
To enable test impact analysis in a build process, you need to:
1). Configure test impact analysis in a test settings file. Check the "Enabling Test Impact Collection" part of this article.
2). Specify that test settings file in the build definition.
3). Set Analyze Test Impact to true. Check the "Enable Test Impact Analysis" part of this article for the details.
Additionally, you can customize your build process template to run only the impacted tests: http://blog.robmaherconsulting.co.nz/2011/03/tfs-2010-build-only-run-impacted-tests.html
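As an aside, and separate from TIA itself (TIA selects tests automatically from coverage data rather than from attributes), the per-component grouping described in the question is often made explicit with test categories, which a build definition's test category filter can also act on. A minimal MSTest sketch, where the class and category names are purely illustrative assumptions:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ComponentATests            // hypothetical test class for component A
{
    [TestMethod]
    [TestCategory("ComponentA")]        // tag a category filter can select on
    public void ComponentA_DoesItsJob()
    {
        // ... arrange / act against component A ...
        Assert.IsTrue(true);            // placeholder assertion
    }
}

[TestClass]
public class ComponentBTests            // hypothetical test class for component B
{
    [TestMethod]
    [TestCategory("ComponentB")]
    public void ComponentB_DoesItsJob()
    {
        Assert.IsTrue(true);            // placeholder assertion
    }
}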

Related

TFS and Automation Test Suites - How to Run

I have 2 executables: an app and an automation test app that performs actions on the app. My automation tests are basically NUnit tests that call the Chrome Web Driver.
Everything is hosted on TFS. In my build definition, I run sanity checks for every PR. I wanted to expand that. The automation tests are divided into many different categories (Sanity being one of them). I've been seeing some stuff related to TFS Test Suites, and my idea was that whenever someone makes a PR, they could choose a test category to run on that PR using that build. To put it more simply: if my PR changes how 'blue buttons' work, I'll run the 'blue buttons' test suite on my PR.
Would using Test Suites be the best solution for this? Has anyone done this or have any useful information on how to achieve this?
Thanks for any responses! Best regards!
You should be able to use the Visual Studio Test Agent Deployment and Run Functional Tests steps in your build definition to run automated tests on build agents.
1). Associate your test methods with test cases in Visual Studio.
2). Create a build definition to build your project and add the two build steps mentioned above. In the Run Functional Tests step, select the test suites which contain the test cases from step 1.
For more details, please refer to this blog post: Executing Automated tests in Build vNext using Test Plan, Test Suites
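For illustration, the kind of categorized NUnit tests the question describes might look like the sketch below (the URL, element id and category names are assumptions, not anything taken from the question). Associating such test methods with test cases grouped into suites, as described above, is what lets a build run just one group of them:

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class BlueButtonTests
{
    private IWebDriver _driver;

    [SetUp]
    public void SetUp()
    {
        // One Chrome Web Driver instance per test, as the question describes
        _driver = new ChromeDriver();
    }

    [TearDown]
    public void TearDown()
    {
        _driver.Quit();
    }

    [Test, Category("Sanity")]
    public void HomePage_Loads()
    {
        _driver.Navigate().GoToUrl("http://localhost:5000");   // placeholder URL of the app under test
        Assert.That(_driver.Title, Is.Not.Empty);
    }

    [Test, Category("BlueButtons")]
    public void BlueButton_OpensDialog()
    {
        _driver.Navigate().GoToUrl("http://localhost:5000");
        _driver.FindElement(By.Id("blue-button")).Click();      // hypothetical element id
        Assert.That(_driver.FindElements(By.ClassName("dialog")), Is.Not.Empty);
    }
}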

Which tests should be run since a previous TFS build?

My managers want us to determine which tests might have to be run, based on coding changes that were made to the application we are testing.
But it is hard to know which tests actually need to be re-verified as a result of a code change. What we have commonly done is test the entire area where the code change occurred, or the entire project/solution.
We were told this could be achieved with TFS build or MTM tools. Could someone share the details?
PS: We are running TFS 2015 Update 4 and VS 2017.
There is a concept called Test Impact Analysis (TIA), which helps analyze the impact of development changes on existing tests. Using TIA, developers know exactly which tests need to be verified as a result of their code change.
The Test Impact Analysis (TIA) feature specifically enables this – TIA is all about incremental validation by automatic test selection. For a given code commit entering the pipeline, TIA will select and run only the relevant tests required to validate that commit. Thus, that test run is going to complete faster, if there is a failure you will get to know about it faster, and because it is all scoped by relevance, analysis will be faster as well.
Test Impact Analysis for managed automated tests is available via a checkbox in the 2.* preview version of the VSTest task.
If enabled, only the relevant set of managed automated tests that need to be run to validate a given code change will run. Test Impact Analysis requires the latest version of Visual Studio, and is presently supported in CI for managed automated tests.
However, this is only available with TFS 2017 Update 1 (you need the 2.* preview version of the VSTest task). For more details, please refer to this blog post: Accelerated Continuous Testing with Test Impact Analysis

Run test plan against 3rd party versioned programs

Using Visual Studio Online, I created a test plan for a program that was written by a different company and that my company uses. We have a specific set of tests that need to be run before we accept a new version of this program. So when I edit the test plan, I would like to be able to manually select a build by typing in, say, version "1.0.1.195". Then when a newer version comes out, I can just type in the newer version and retest using those same tests. However, when I go to select a build, TFS filters against the builds for my own code. Is it possible to do what I'm asking using TFS?
EDIT
To answer a few of the questions in the comments, I'll be a bit more descriptive about what I am doing. A 3rd party company made a program we use to test some hardware. Every now and then there is an update to that software. Since a few of us use this program to test the hardware, we need to know that the software can be installed with little to no downtime while upgrading. So we came up with a small set of tests that we run the program through to make sure that we can test reliably. Those tests were written in a Word document, so I put them into MTM. Although I make some software that is related to this, their software depends on mine. I've not had to update my code for some time now. My overall intention is to use MTM to document my testing of this program.
Do you want to store the version of the 3rd party component on TFS, along with the results of the test run it was tested with?
That would be nice. My ultimate end game is to put the results of those tests back into that Word document and make it available to those who don't have MTM installed (which is everyone). This way, when a new version of the software comes out, I can just go into MTM, reset all my tests back to active, update the version number and retest.
The build you set up in Microsoft Test Manager (MTM) defines the drop location containing your tests, not the application under test (it can be different if you build your tests using another build).
That's why you can only select one of your own builds for your code.
What you are talking about is deployment.
That means you have to make sure the right version of the 3rd party program is deployed to the environment the tests are running on.
EDIT
What you need is a Test Configuration
Here you can find a good explanation of how to create one: Test Configurations - specifying test platforms
The idea in your use case would be as follows
(below I'm using terms described in the article mentioned above):
Create a Configuration Variable where you will store the current version of the 3rd party program
Create a Test Configuration and add this variable to it.
Set this Test Configuration as default test configuration for your test plan.
Be aware that if your test plan already contains test cases, you will have to add this Test Configuration to each Test Case manually, since only newly added Test Cases get it assigned automatically.
If you get a new version of the 3rd party program you will:
Add the new version of the program to the values allowed for the Configuration Variable
Open the Test Configuration you are using and update the program's version to the new one.
Reset your tests and run them.
Doing so you:
store all the versions you have tested so far in the Configuration Variable, since you add the new one instead of overwriting the old one, so you get a kind of history.
store the last version you have tested in the Test Configuration.
That should meet your needs.
Additional suggestion
(has nothing to do with your question but with your use case)
Consider describing tests within your Test Cases instead of creating a Word document.
Here is a good place to start reading: How to: Create a Manual Test Case
The benefits would be:
You can run your manual tests using Test Runner provided by MTM
Doing so, all the steps you have performed are stored with the Test Result, you can add comments to each step while executing it, etc.
You can still export the test description to a Word document using this MTM add-on: Test Scribe.
Using this add-on you can also create a report of your test execution.
Additionally, if you are going to use MTM more in your daily job, I would recommend this free e-book: Testing for Continuous Delivery with Visual Studio 2012

Test Impact Analysis & MSBuild: execute only impacted tests

I have a TFS build in VS 2010. Following the build, unit tests are executed.
In the build summary it tells me that "1 test run(s) completed - 100% average pass rate", but below this it states "No tests were impacted".
I guess "Impacted Tests" relates to functionality providing the ability to run only the tests that were impacted by the code checked in?
Or is there a way that I can run only the tests that were impacted, based on the result of Test Impact Analysis?
I have set "Analyze Test Impact" to True, but still no results are coming and it is executing all test cases in the test projects.
The following worked for me:
Microsoft.TeamFoundation.TestImpact
http://scrumdod.blogspot.in/2011/03/tfs-2010-build-only-run-impacted-tests.html

Parameterized Functional Tests using TFS / Testing Center?

I'm trying to leverage the functionality of the TFS Test Case, which allows a user to add parameters to a test case. However, when I set up a plain vanilla Unit Test (which will become my functional / integration test), and use the Insert Parameter feature, I just don't seem to be able to access the parameter data. From the little I can find, it seems as if this parameterization is only for coded UI tests.
While it's possible for me to write a data-driven unit test with the [DataSource] attribute on the test, this would mean a separate place to manage the data for the testing, potentially a new UI, etc. Not terrible, but not optimal. What would be ideal is to manage everything through Testing Center, but I cannot for the life of me find a description of how to get at that data inside the unit test.
Am I missing something obvious?
Either I didn't understand your question or maybe you answered it yourself :-). Let me explain:
Both Unit Tests and Coded UI Tests (in fact, most MSTest-based tests) leverage the same [DataSource] infrastructure. That way, tests can be parameterized without the need of embedding the parameter data in the test itself.
VS 2005 and VS 2008 basically offered databases (text, XML or relational ones) as valid test data sources. VS 2010 (and Microsoft Test Manager) introduced a new kind of data source: a "Test Case Data Source", which is automatically inserted in a Coded UI test generated from a test case recording.
But nothing prevents you from doing the same with your own unit tests. I think the workflow below could work for you:
1). Create a test case in MTM.
2). Add your parameters and data rows.
3). Save your test case. Take note of the work item ID (you're going to need it).
4). Create your unit test and add the following attribute to the method header:
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.TestCase", "http://my-tfs-server:8080/tfs/my-collection;My-Team-Project", "WI#", DataAccessMethod.Sequential), TestMethod]
5). In the attribute above, replace WI# with the work item ID from step 3.
6). (Optional) In Visual Studio, go to the Test menu and click Windows | Test View. Select the unit test you just created, right-click it and choose "Associate Test to Test Case". Point to the same test case work item created in step 3, and you have now turned your manual test case into an automated test case. NOTE: When you automate a test you can no longer run it manually from MTM. You need Lab Management (and an environment configured to run automated tests) in order to schedule and run an automated test case.
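To make step 4 concrete, here is a minimal sketch of how the test case parameters can then be read inside the unit test through TestContext.DataRow. The server URL, project name, work item ID and the "UserName"/"Password" parameter names below are placeholders, not values from the question:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ParameterizedFunctionalTests
{
    // Set by the MSTest runner; DataRow holds the current iteration's parameter values.
    public TestContext TestContext { get; set; }

    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.TestCase",
                "http://my-tfs-server:8080/tfs/my-collection;My-Team-Project",
                "123",                       // work item ID of the test case (placeholder)
                DataAccessMethod.Sequential)]
    [TestMethod]
    public void Login_Succeeds_ForEachParameterRow()
    {
        // Column names match the parameter names defined on the MTM test case.
        string userName = TestContext.DataRow["UserName"].ToString();
        string password = TestContext.DataRow["Password"].ToString();

        // ... exercise the application under test with these values ...
        Assert.IsFalse(string.IsNullOrEmpty(userName));
        Assert.IsFalse(string.IsNullOrEmpty(password));
    }
}

The test then runs once per data row defined on the test case, the same way other [DataSource]-driven MSTest tests do.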
