What is a Test Plan in TFS?

If you have worked with TFS on the Microsoft platform, you might have seen a Test Plan in TFS. What is the Test Plan in TFS?
We create test projects on the development platform (Visual Studio), but I could not understand the TFS Test Plan. Is this about code?

A Test Plan is part of Microsoft Test Manager, which is a tool for creating and running manual tests, and managing test/lab environments. It has nothing to do with code, unless you're converting manual tests to automated tests (and even then, the Test Plan isn't an important piece of that process).
A Test Plan is simply a container for Test Suites, which are in turn containers for Test Cases. Each level represents a finer degree of granularity, and each level has its own associated metadata. This allows you to create sophisticated reports at various degrees of specificity.
Some examples of test plans:
In Scrum, teams typically have a single test plan per iteration
In traditional projects, test plans may be organized around functionality or layers
There is a lot of metadata (builds, configurations, environments, etc.) that you can associate with a test plan, but when you get right down to it they are just containers for test suites, which are containers for test cases.
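If you ever need to create that container hierarchy programmatically, a rough sketch using the TFS client object model (Microsoft.TeamFoundation.TestManagement.Client) might look like this; the collection URL, project name, and titles below are placeholders, not values from the question:

```csharp
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.TestManagement.Client;

class CreatePlanHierarchy
{
    static void Main()
    {
        // Connect to the collection and team project (placeholders).
        var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://yourserver:8080/tfs/DefaultCollection"));
        ITestManagementTeamProject project =
            tfs.GetService<ITestManagementService>().GetTeamProject("YourProject");

        // Plan -> suite -> test case, mirroring the hierarchy described above.
        ITestPlan plan = project.TestPlans.Create();
        plan.Name = "Sprint 1";
        plan.Save();

        IStaticTestSuite suite = project.TestSuites.CreateStatic();
        suite.Title = "Checkout";
        plan.RootSuite.Entries.Add(suite);

        ITestCase testCase = project.TestCases.Create();
        testCase.Title = "Place an order with a valid card";
        testCase.Save();
        suite.Entries.Add(testCase);
        plan.Save();
    }
}
```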

A Test Plan is simply a set that contains Test Suites: Requirement-based, Static, or Query-based.
Typically, a test plan is made per iteration; in Scrum, for example, one test plan is created for each iteration.
A Master Test Plan will contain all the test plans of the iterations.
Test Suites in Test Plans are the ones which contain a set of Test Cases.

Create a Child Test Plan under a Master Test Plan

I am using Visual Studio 2016 with the latest updates. Though I am not the administrator of the project I am working on, I have access to create as many Test Plans as required (though I can't delete them once created). I have already created a Master Test Plan, which has 1000+ test cases, but we do not need to run all of them every time. So I was curious whether I could create a child test plan under the Master one and include only the test cases that are necessary. I only get the green '+' when trying to create a test plan, but nothing to create child ones. I didn't find any online guide either.
Is it actually possible to create one in VSTS?
No, you cannot create a child test plan; it's not supported.
However, you can create Test Suites under the test plan, and then manage the test cases within each specific test suite.
Please see Create a test plan and test suite for details.
And this article for your reference: Test Planning and Management with Visual Studio Team Services
A Test Plan is the highest grouping level for tests. What I often do, for example, is create a Test Plan for each sprint. So Test Plan 'Sprint 1' contains all the test cases that are applicable to that sprint. At the end of a sprint/start of a new sprint, you clone your current test plan and then modify it for the new sprint.
A Test Plan does not contain test cases directly (it's possible, just not recommended). Instead you use Test Suites to group the Test Cases. You have different types of Test Suites:
Static: you manually add Test Cases to a static suite
Requirement based: this allows you to create a suite that's linked to a Work Item. For example, you can define test cases that map to a Product Backlog Item that you're working on
Query based: this allows you to select test cases based on a query. For example, all high priority tests or all tests having a specific tag.
A Test Case can belong to multiple suites. You can nest other suites in a Static Suite.
So in your scenario, you have 1000 test cases that you want to group in suites. If this is a manual process, you can use a static test suite and just add existing test cases one by one until the suite meets your needs. If you can create a query that selects the test cases you want to work with you can use a query based suite.
I would recommend tagging your Test Cases with something like 'Ready For Test' (or another label that makes sense in your scenario) and then use a Query based suite. This is easier to maintain and probably less work. Especially if you use the Bulk Edit options to quickly add the tags.
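As a rough sketch of that tag-plus-query approach using the TFS client object model (the collection URL, project name, plan id, and tag are placeholders, and the WIQL shown is just an assumed example):

```csharp
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.TestManagement.Client;

class CreateQueryBasedSuite
{
    static void Main()
    {
        // Connect to the collection (URL and project name are placeholders).
        var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://yourserver:8080/tfs/DefaultCollection"));
        ITestManagementTeamProject project =
            tfs.GetService<ITestManagementService>().GetTeamProject("YourProject");

        // A query-based suite that picks up every test case tagged 'Ready For Test'.
        IDynamicTestSuite suite = project.TestSuites.CreateDynamic();
        suite.Title = "Ready For Test";
        suite.Query = project.CreateTestQuery(
            "SELECT * FROM WorkItems " +
            "WHERE [System.WorkItemType] = 'Test Case' " +
            "AND [System.Tags] CONTAINS 'Ready For Test'");

        // Attach the suite to an existing plan and persist it.
        ITestPlan plan = project.TestPlans.Find(1); // plan id is a placeholder
        plan.RootSuite.Entries.Add(suite);
        plan.Save();
    }
}
```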

Managing the test cases , scenarios and feature files with specflow

I have a number of different manual test cases which need to be automated with SpecFlow.
There are multiple test cases and multiple scenarios, so will there be multiple feature files?
We are following a sprint system; each sprint has 100+ test cases which are going to be automated.
What is the best practice for managing the test cases and scenarios using feature files? There's no point in creating the same set of functions every time for different test cases.
You would manage this the same as you would manage any other code files: make changes, and if the changes conflict with others' changes, merge them before you check in.
The best way to avoid merge issues is to try to work in different areas. Create many feature files, as then multiple people can work on different features at one time and you won't have conflicts.
Communication between the testers is important in avoiding conflicts as well, and in the case of scenarios in SpecFlow it is important in ensuring that you use consistent step names. Checking in often, even after each scenario has been created, will also ensure that you minimise the number of merge issues.
EDIT
Based on your edited question: in SpecFlow all steps are global, so if Feature1 has a scenario with a step Given a user 'Bob' is logged in and Feature32 also has a scenario with the step Given a user 'Tom' is logged in, then they will both share the same step source code, and so the same function will be reused.
As long as you write your steps in a consistent manner (i.e. use the same text) then you should get excellent reuse of functions across all of your hundreds of features and scenarios.
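For example, a single parameterised binding like the sketch below serves both features; the LoginHelper call is a hypothetical stand-in for your own code:

```csharp
using TechTalk.SpecFlow;

[Binding]
public class LoginSteps
{
    // Both of these steps match the one regex below, so the same
    // method is reused across feature files:
    //   Given a user 'Bob' is logged in   (Feature1)
    //   Given a user 'Tom' is logged in   (Feature32)
    [Given(@"a user '(.*)' is logged in")]
    public void GivenAUserIsLoggedIn(string userName)
    {
        // Hypothetical helper; replace with your real login code.
        LoginHelper.LogIn(userName);
    }
}
```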

How to build dependent tests for regression testing

I have an ASP.NET MVC project, and I thought I could use a tool like MSTest or NUnit to perform regression testing from the controller layer down to the database. However, I hit an issue: tests are not designed to run in order (you can use ordered tests in MSTest, but the tests still run concurrently). The other problem is how to make the data created by one test accessible to another.
I have looked at Selenium and WatiN, but I wanted to write something that is not dependent on the UI layer, which is most likely going to change and increase the amount of work needed to maintain the tests.
Any suggestions? Is it just the wrong tool for the job? Should I just use Selenium/WatiN?
Tests should always be independent of each other, so that running order doesn't matter. If your tests depend on other tests you are losing control of what you are testing.
WatiN, and I'm assuming Selenium, won't solve your ordering problem. I use WatiN and NUnit for UI automation and the running order is not guaranteed, which initially posed similar problems to what you're seeing.
In the vein of what dskh answered, you want independent tests, and I've done this in two ways for Integration / Regression black-ish box testing.
First: in your test setup, set up any precondition data values so you're at a known good state. For system regression test automation, I've got a number of database scripts that get called to reset data to a known state; this adds some dependencies, so be conscious of the design. Note: in straight unit testing, look at using mock objects to remove dependencies and get your test to be testing one thing. Mock objects, stubbing method calls, etc. are the way to go if you can, which based on your question sounds likely.
Second: for cases where certain things absolutely had to be set up in a certain way, and scripting them into test setup added a ridiculous amount of necessary system-internal knowledge (e.g. all users set up + all permissions set up + etc.), a small number of "bootstrap" tests were created, in their own namespace to allow easy running via NUnit, to bootstrap the system. Keeping the number of tests small and making sure the tests were very stable was paramount. Now, on a fresh install, the bootstrap tests are run first and serve as advanced smoke tests; no further tests are run if any of the bootstrap tests fail. It is clunky in some ways, but the alternatives were clunkier or more time/resource/whatever consuming.
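A minimal sketch of the first approach with NUnit, assuming a reset script of plain T-SQL (the connection string, script path, and test names are placeholders):

```csharp
using System.Data.SqlClient;
using System.IO;
using NUnit.Framework;

[TestFixture]
public class OrderRegressionTests
{
    // Placeholder connection string for the test database.
    private const string ConnectionString =
        "Server=localhost;Database=AppTest;Integrated Security=true";

    [SetUp]
    public void ResetToKnownState()
    {
        // Re-run the reset script before every test so no test depends
        // on data left behind by another one. Assumes the script is a
        // single batch (no GO separators).
        string script = File.ReadAllText(@"Scripts\ResetTestData.sql");
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(script, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }

    [Test]
    public void CreateOrder_PersistsOrder()
    {
        // Controller-to-database assertions run against the known state.
    }
}
```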
Update
The link below (and I assume the project) is dead.
Your best option may be using Selenium and the Page Object Model.
See here: http://code.tutsplus.com/articles/maintainable-automated-ui-tests--net-35089
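A minimal page object sketch with Selenium WebDriver (the element ids are assumptions, not a specific implementation):

```csharp
using OpenQA.Selenium;

// The test talks to this class instead of the raw UI, so markup
// changes only require edits here rather than in every test.
public class LoginPage
{
    private readonly IWebDriver _driver;

    public LoginPage(IWebDriver driver) => _driver = driver;

    public void LogIn(string userName, string password)
    {
        // Element ids are assumptions for this sketch.
        _driver.FindElement(By.Id("userName")).SendKeys(userName);
        _driver.FindElement(By.Id("password")).SendKeys(password);
        _driver.FindElement(By.Id("submit")).Click();
    }
}
```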
Old Answer
The simplest solution I have found is Rob Conery's Quixote:
https://github.com/robconery/Quixote
It works by firing HTTP requests and consuming the responses.
It is simple to set up and use, and provides integration testing. This tool also allows a series of tests to be executed in order, to create test dependencies.

When using Microsoft Test Manager 2010 with SfTS, can Acceptance Tests be reused for regression testing?

We are moving our projects to TFS 2010 using the SfTS v3 (Scrum for Team System) template. We need to understand how Microsoft Test Manager is supposed to be used in this Scrum process.
Specific scenario & question:
The QA manager uses Test Manager to create a test plan. This aligns with our sprint. In the sprint, he creates Acceptance Test WIs (Work Items) that are core to the functionality. These pass for the specific sprint. On the next sprint, he makes another test plan. The Acceptance Test WIs from the previous sprint are needed again as part of the regression testing.
Does the Acceptance Test (AC) state have to be "passed" so the Product Backlog Item (PBI) can be closed? Can the AC WI be reused between sprints?
Does the Acceptance Test (AC) state have to be "passed" so the Product Backlog Item (PBI) can be closed?
Yes
Can the AC WI be reused between sprints?
Of course, for regression. To do this you will need to copy the test cases from one sprint to another, but remember this will not make a deep copy and will keep a reference to the previous sprint in reporting. You can use the following tool for bulk copy:
http://visualstudiogallery.msdn.microsoft.com/72576517-821b-46c2-aa1a-fab940752292
Visual Studio 11 Beta will support bulk deep copy without references, and it's available now for production use, so I recommend using it.
http://www.microsoft.com/visualstudio/11/en-us/downloads

Is there an easy, free way to do test case management in Jira?

I think the only part I don't get is how you handle the run results. If I set up a new project in Jira for test cases, how would I make it so I can mark a test case as passed or failed without closing out the Jira issue?
So basically I want the original Jira issue to always stay open, and then be able to mark it as passed or failed against a specific release. The original issue should stay unchanged and just somehow log a result set?
I do not have Bamboo.
Does that make any sense?
We have set up a simple custom workflow in Jira without using Confluence.
We added one new issue type, Test Case, and one new sub-task, Test Run.
A Test Case has only three workflow actions: Pass, Fail and Invalid (the last one is to make a Test Case redundant), and two statuses: Open and Invalid.
Test Run is automatically created when Test Case passes or fails. Users do not manually create test runs. We use one of the plugins to create a subtask on transition.
A Test Run can be in a Passed or Failed state, and has version info, the user who passed or failed it, and a comment.
This is it.
Here are some links that I used to set up Jira for Test Case Management:
Test Case Management in Jira
Using Jira for Test Case Management
Create On Transition Plugin
The approach we are following is as follows:
We use Confluence for implementing our test cases.
Each test case has its own page describing the setup, the scenario to run and all possible outcomes.
We have a test library page which is the parent of all these test cases.
When we want to start a validation cycle on a particular release, we use a script which generates, for each test case in Confluence, a corresponding 'test run' issue (a rough sketch of such a script follows below).
(#DennisG - JIRA allows you to define different issue types, each with its own workflow.)
The summary is the summary of the test case
The description is the scenario and outcome of the test case
We add a specific Confluence link referring to the test case
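The script itself isn't included here, but a rough sketch of the idea against Jira's standard REST API might look like the following; the URL, credentials, project key, issue type name, and field values are all placeholders:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class CreateTestRunIssues
{
    static async Task Main()
    {
        // Base URL and credentials are placeholders.
        using var client = new HttpClient { BaseAddress = new Uri("https://jira.example.com") };
        var token = Convert.ToBase64String(Encoding.UTF8.GetBytes("user:password"));
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic", token);

        // One issue per test case page; in the real script the summary
        // and description would be pulled from Confluence.
        var json = @"{
          ""fields"": {
            ""project"":   { ""key"": ""VAL"" },
            ""issuetype"": { ""name"": ""Test Run"" },
            ""summary"":   ""Login succeeds with valid credentials"",
            ""description"": ""Scenario and expected outcome from the Confluence test case.""
          }
        }";

        var response = await client.PostAsync(
            "/rest/api/2/issue",
            new StringContent(json, Encoding.UTF8, "application/json"));
        response.EnsureSuccessStatusCode();
    }
}
```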
The 'test run' issue workflow contains 4 stages:
Open
In Progress
Blocked
Closed
And 3 resolutions:
Success
Failure
Review testcase
We then start validating all 'test run' issues.
Using dashboard gadgets it is easy to see how many test cases still need to be run, how many are blocked, how many have been done, and how many have failed...
In case the resolution is 'review testcase', we have the ability to adapt the test case itself.
Conclusion - JIRA is certainly usable as a test execution management environment. Confluence, as a wiki, provides an environment to build the necessary hierarchies (technical, functional).
Last point: we have started to use Bonfire (a plugin for JIRA) extensively:
http://www.atlassian.com/en/software/bonfire
This shortens the 'manual' testing cycle considerably.
For us it had an ROI of a couple of weeks.
Hope this helps,
Francis
PS. If you're interested to use the script send me a note.
We are using a test case management tool called informup.
It integrates with Jira.
In addition, it is fully integrated with the system, so if you want to use it as both a test case management and a bug tracking system, you can do that as well.
You can use PractiTest, a test management tool that integrates with JIRA. PractiTest covers your entire QA process, so you can use it to create Requirements, Tests and Test Sets, and use the integration option to report issues in JIRA. You can also link between the different entities.
Read more about PractiTest's integration with JIRA.
To be honest, I'm not sure that using JIRA (or any other bug/issue tracking tool) as a test management tool is a good idea. The problem is that issue trackers usually have a single main entity (the issue), whereas test management tools usually distinguish between test cases and actual tests/results. This way you can easily reuse the same test case for different releases and also store a history of test results. Additional entities such as test runs and test suites also usually make it a lot easier to manage and track your data. So instead of using Jira for test management, you might want to consider a dedicated test management tool that integrates with Jira. There are many test management tools out there, including open source projects:
http://www.opensourcetestmanagement.com/
You could also take a look at our tool TestRail, which also comes with Jira integration:
http://www.gurock.com/testrail/
http://www.gurock.com/testrail/jira-test-management.i.html
Have you tried looking in Jira's plugin directory at https://plugins.atlassian.com to see what's available to extend the core functionality? There may be something there that could be installed.
There are tools out there that combine both issue tracking and test management (e.g. elementool.com); however, if you are after a more feature-rich experience, you may need to start looking at dedicated tools.
If after looking around you find that there are no suitable solutions to enable you to have things in one place, you may want to take a look at TestLodge test case management, which is a tool I have developed that integrates easily with Jira.
Why not just integrate JIRA with a test management tool? So, for example, we use Kualitee. It integrates with JIRA and provides traceability from the defect reported to the underlying test case and requirements. So, you can run your entire QA cycle on Kualitee and sync and assign defects to your team in JIRA.
You can also generate your test case execution reports and bug reports and export them so you are all set on that front as well.
