I am using Visual Studio 2016 with the latest updates. I am not the administrator of the project I am working on, but I can create as many Test Plans as required (though I can't delete them once created). I have already created a Master Test Plan, which has 1000+ test cases, but we do not need to run all of them every time. So I was curious whether I could create a child test plan under the Master one and include only the test cases that are necessary. I only get the green '+' when trying to create a test plan, but nothing to create child ones, and I didn't find any online guide either.
Is it actually possible to create one in VSTS?
No, you cannot create a child test plan; it's not supported.
However, you can create Test Suites under the test plan and then manage the test cases within a specific test suite.
Please see Create a test plan and test suite for details.
See also this article for reference: Test Planning and Management with Visual Studio Team Services
A Test Plan is the highest grouping level for tests. What I often do is create a Test Plan for each sprint, for example. So Test Plan Sprint 1 contains all the test cases that are applicable to that sprint. At the end of a sprint/start of a new sprint, you clone your current test plan and then modify it for the new sprint.
A Test Plan does not contain test cases directly (it's possible, just not recommended). Instead you use Test Suites to group the Test Cases. You have different types of Test Suites:
Static: you manually add Test Cases to a static suite
Requirement based: this allows you to create a suite that's linked to a Work Item. For example, you can define test cases that map to a Product Backlog Item that you're working on
Query based suite: select test cases based on a query. For example, all high priority tests or all tests having a specific tag.
A Test Case can belong to multiple suites. You can nest other suites in a Static Suite.
So in your scenario, you have 1000 test cases that you want to group in suites. If this is a manual process, you can use a static test suite and just add existing test cases one by one until the suite meets your needs. If you can create a query that selects the test cases you want to work with you can use a query based suite.
I would recommend tagging your Test Cases with something like 'Ready For Test' (or another label that makes sense in your scenario) and then use a Query based suite. This is easier to maintain and probably less work. Especially if you use the Bulk Edit options to quickly add the tags.
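For illustration, a query-based suite is normally built with the query editor in the web UI, but the underlying WIQL for the tagging approach above would look roughly like this (the tag value is just an example):

    SELECT [System.Id]
    FROM WorkItems
    WHERE [System.WorkItemType] = 'Test Case'
      AND [System.Tags] CONTAINS 'Ready For Test'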
I have a number of different manual test cases which need to be automated with SpecFlow.
There are multiple test cases and multiple scenarios, so will there be multiple feature files?
We are following a sprint system; each sprint has 100+ test cases which are going to be automated.
What is the best practice for managing the test cases and scenarios using feature files? There's no point in creating the same set of functions every time for different test cases.
You would manage this the same as you would manage any other code files. Make your changes; if they conflict with others' changes, merge before you check in.
The best way to avoid merge issues is to try to work in different areas. Create many feature files, as then multiple people can work on different features at the same time and you won't have conflicts.
Communication between the testers is important in avoiding conflicts as well, and in the case of scenarios in SpecFlow it is important in ensuring that you use consistent step names. Checking in often, even after each scenario has been created, will also ensure that you minimise the number of merge issues.
EDIT
Based on your edited question: in SpecFlow all steps are global, so if Feature1 has a scenario with the step Given a user 'Bob' is logged in and Feature32 also has a scenario with the step Given a user 'Tom' is logged in, then they will both share the same step source code, and so the same function will be reused.
As long as you write your steps in a consistent manner (i.e. use the same text) then you should get excellent reuse of functions across all of your hundreds of features and scenarios.
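As a quick illustration, here is a minimal sketch of such a shared binding (the class, method, and helper names are hypothetical): because bindings are global, the 'Bob' and 'Tom' steps above both resolve to this single method, with the user name captured as a parameter.

    using TechTalk.SpecFlow;

    [Binding]
    public class LoginSteps
    {
        // Matches both "Given a user 'Bob' is logged in" and
        // "Given a user 'Tom' is logged in"; the quoted name becomes userName.
        [Given(@"a user '(.*)' is logged in")]
        public void GivenAUserIsLoggedIn(string userName)
        {
            // Hypothetical helper: log the given user in through your app's test driver.
            TestSession.LogIn(userName);
        }
    }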
If you have worked with TFS on the Microsoft platform, you might have seen a Test Plan in TFS. What is the Test Plan in TFS?
We are creating test projects on the development platform (Visual Studio), but I could not understand the TFS Test Plan. Is this about code?
A Test Plan is part of Microsoft Test Manager, which is a tool for creating and running manual tests, and managing test/lab environments. Nothing to do with code, unless you're converting manual tests to automated tests (even then, the Test Plan isn't an important piece of that process).
A Test Plan is simply a container for Test Suites, which are in turn containers for Test Cases. Each level represents a finer degree of granularity, and each level has its own associated metadata. This allows you to create sophisticated reports at various degrees of specificity.
Some examples of test plans:
In Scrum, teams typically have a single test plan per iteration
In traditional projects, test plans may be organized around functionality or layers
There is a lot of metadata (builds, configurations, environments etc) that you can associate with a test plan, but when you get right down to it they are just containers for test suites, which are containers for test cases.
A Test Plan is simply a set that contains Test Suites: requirement-based, static, or query-based.
Each Test Plan is typically created per iteration.
A Master Test Plan will contain all the test plans of the iterations.
Test Suites in Test Plans are the ones which contain a set of Test Cases.
Some examples of test plans:
In Scrum, one test plan is created for each iteration.
I have an ASP.NET MVC project and I thought I could use a tool like MSTest or NUnit to perform regression testing from the controller layer down to the database. However, I hit an issue: tests are not designed to run in order (you can use ordered tests in MSTest, but the tests still run concurrently), and the other problem is how to make the data created by one test accessible to another.
I have looked at Selenium and WatiN, but I just wanted to write something that is not dependent on the UI layer, which is most likely going to change and increase the amount of work needed to maintain the tests.
Any suggestions? Is it just the wrong tool for the job? Should I just use Selenium/WatiN?
Tests should always be independent of each other, so that running order doesn't matter. If your tests depend on other tests you are losing control of what you are testing.
WatiN, and I'm assuming Selenium, won't solve your ordering problem. I use WatiN and NUnit for UI automation and the running order is not guaranteed, which initially posed similar problems to what you're seeing.
In the vein of what dskh answered, you want independent tests, and I've done this in two ways for Integration / Regression black-ish box testing.
First: In your test setup, have any precondition data values set up so you're at a known "good state". For system regression test automation, I've got a number of database scripts that get called to reset data to a known state; this adds some dependencies, so be conscious of the design. Note: In straight unit testing, look at using mock objects to take out dependencies and get your test to be "testing one thing". Mock objects, stubbing method calls, etc. are the way to go if you can, which based on your question sounds likely.
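A minimal NUnit sketch of that idea, assuming a hypothetical TestDatabase helper that wraps your reset scripts (the controller and script names are also made up): every test starts from the same known state, so running order no longer matters.

    using NUnit.Framework;

    [TestFixture]
    public class OrderRegressionTests
    {
        [SetUp]
        public void ResetToKnownState()
        {
            // Hypothetical helpers that reset and seed the database before each test.
            TestDatabase.RunScript("reset_orders.sql");
            TestDatabase.RunScript("seed_orders.sql");
        }

        [Test]
        public void CreatingAnOrder_PersistsIt()
        {
            var controller = new OrderController();   // hypothetical controller under test
            controller.Create(new Order { Id = 42 });

            Assert.That(TestDatabase.OrderExists(42), Is.True);
        }
    }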
Second: For cases where certain things absolutely had to be set up in a certain way, and scripting them in the test setup required a ridiculous amount of system-internal knowledge (e.g. all users set up + all permissions set up + etc.), a small number of "bootstrap" tests were created, in their own namespace to allow easy running via NUnit, to bootstrap the system. Keeping the number of tests small and making sure the tests were very stable was paramount. Now, on a fresh install, the bootstrap tests are run first and serve as advanced smoke tests; no further tests are run if any of the bootstrap tests fail. It is clunky in some ways, but the alternatives were clunkier or more time/resource/whatever consuming.
Update
The link below (and I assume the project) is dead.
The best option may be using Selenium and the Page Object Model.
See here: http://code.tutsplus.com/articles/maintainable-automated-ui-tests--net-35089
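As a rough sketch of the Page Object Model with Selenium WebDriver in C# (the page class and locators are made up for illustration), tests talk only to the page object, so when the UI changes only the page object needs updating, not every test:

    using OpenQA.Selenium;

    public class LoginPage
    {
        private readonly IWebDriver _driver;

        public LoginPage(IWebDriver driver)
        {
            _driver = driver;
        }

        // Encapsulates all knowledge of the login page's markup in one place.
        public void LogIn(string user, string password)
        {
            _driver.FindElement(By.Id("username")).SendKeys(user);
            _driver.FindElement(By.Id("password")).SendKeys(password);
            _driver.FindElement(By.Id("login-button")).Click();
        }
    }

    // In a test: new LoginPage(driver).LogIn("alice", "secret");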
Old Answer
The simplest solution I have found is Rob Conery's Quixote:
https://github.com/robconery/Quixote
It works by firing http requests and consuming responses.
It is simple to set up and use, and provides integration testing. This tool also allows a series of tests to be executed in order to create test dependencies.
I am using Ruby on Rails 3.2.2 and Cucumber with the cucumber-rails gem. I would like to know which Cucumber tags are commonly used throughout an application, or at least what criteria I should consider to make tags "efficient"/"useful". Moreover, I would like to know how, and in which ways, I could or should use Cucumber tags.
Tags are most commonly used to select or exclude certain tests from running (a short example follows the list below). Your particular situation will dictate which 'groups' of tests are useful to run or not run for a particular test run, but some common examples could be:
@slow - indicates a test that takes a long time to run; you might want to exclude this from most test runs and only run it on an overnight build so that developers don't have to wait for it every time.
@wip - indicates that this test exercises unfinished functionality, so it would be expected to fail while the feature is in development (and when it's done, the @wip tag would be removed). This has special significance in Cucumber, as it will return a non-zero exit code if any @wip tests actually pass.
@release_x, @sprint_y, @version_z etc. Many teams tag each test with information about which release/sprint/version contains it, so that they can run a minimal suite of tests during development. Generally the same as the @wip tag, except that the tags stay attached to the test so they always know when a particular feature was introduced.
@payments, @search, @seo etc. Basically any logical grouping of tests that isn't already expressed by the organisation of your feature files. Commonly used when a test relates to a cross-cutting concern, or when your project is divided into components along different lines to your feature files.
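As a quick illustration (the feature and tag names here are only examples), tags sit above a feature or scenario in the .feature file and are then included or excluded on the command line:

    @payments
    Feature: Refunds

      @slow
      Scenario: Refund a large batch of orders
        # ... steps ...

    $ cucumber --tags @payments    # run only the payment tests
    $ cucumber --tags ~@slow       # exclude slow tests (Cucumber 1.x syntax; newer
                                   # versions use --tags "not @slow")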
Tags are also used to fire hooks - bits of code which can run before, after, or 'around' tests with particular tags. Some examples of this are:
@javascript - indicates that a test needs JavaScript support, so the hook can switch from an HTTP-only driver to one with JS support. Capybara goes one step further by automatically switching to a driver named after the tag, if one is found (so you could use e.g. @desktop, @mobile, @tablet drivers).
@logged_in - indicates that the test needs to run in the context of a logged-in user. This sometimes makes sense to express with a tag, although a Background section would be more commonly used.
Additionally, tags can be used just for informational purposes. I've seen teams tag tests with the related issue number, author, developer, amongst other things, many of which can be useful (but many of which duplicate information which is easily found in source control, so I'd caution against that).
I think the only part I don't get is how you handle the run results. If I set up a new project in Jira for test cases, how would I make it so I can mark a test case as passed or failed but not close out the Jira issue?
So I basically want the original Jira issue to stay open at all times, and then be able to mark it passed or failed against a specific release. The original issue should stay unchanged and just somehow log a result set?
I do not have Bamboo.
Does that make any sense?
We have setup a simple custom workflow in Jira without using Confluence.
We added one new issue type, Test Case, and a new sub-task type, Test Run.
A Test Case has only three workflow actions: Pass, Fail and Invalid (the last one marks a Test Case as redundant), and two statuses: Open and Invalid.
A Test Run is automatically created when a Test Case passes or fails. Users do not manually create test runs; we use one of the plugins to create a sub-task on transition.
A Test Run can be in a Passed or Failed state and records version info, the user who passed or failed it, and a comment.
This is it.
Here are some links that I used to setup Jira for Test Case Management:
Test Case Management in Jira
Using Jira for Test Case Management
Create On Transition Plugin
The approach we are following is as follows:
We use Confluence for implementing our test cases.
Each test case has its own page describing the setup, the scenario to run and all possible outcomes.
We have a test library page which is the parent of all these test cases.
When we want to start a validation cycle on a particular release, we use a script which generates, for each test case in Confluence, a corresponding 'test run' issue.
(@DennisG - JIRA allows you to define different issue types, each with its own workflow)
The summary is the summary of the test case.
The description is the scenario and outcome of the test case.
We add a specific Confluence link referring to the test case.
The test run issue workflow contains 4 stages:
Open
In Progress
Blocked
Closed
And 3 resolutions:
Success
Failure
Review test case
We then start validating all 'test run' issues.
Using dashboard gadgets it is easy to see how many test cases still need to be run, how many are blocked, how many have been done, and how many have failed ...
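For example, a gadget can be backed by a saved JQL filter; assuming a project key of QA, the 'Test Run' issue type described above, and a hypothetical version name, failed runs for a given release could be filtered like this:

    project = QA AND issuetype = "Test Run" AND fixVersion = "1.2.0" AND resolution = Failure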
If the resolution is 'Review test case', we have the ability to adapt the test case itself.
Conclusion: JIRA is certainly usable as a test execution management environment. Confluence, as a wiki, provides an environment to build the necessary hierarchies (technical, functional).
Last point.
We have started to use Bonfire (a plugin for JIRA) extensively:
http://www.atlassian.com/en/software/bonfire
This shortens the 'manual' testing cycle considerably.
For us it had an ROI of a couple of weeks.
Hope this helps,
Francis
PS. If you're interested to use the script send me a note.
We are using a test case management tool called informup.
The test case management tool integrates with Jira.
In addition, it is fully integrated into the system, so if you want to use it as both a test case management and a bug tracking system, you can do that as well.
You can use PractiTest, a test management tool that integrates with JIRA. PractiTest covers your entire QA process, so you can use it to create Requirements, Tests and Test Sets, and use the integration option to report issues in JIRA. You can also link between the different entities.
Read more about PractiTest's integration with JIRA.
To be honest, I'm not sure that using JIRA (or any other bug/issue tracking tool) as a test management tool is a good idea. The problem with this is that issue trackers usually have a single main entity (the issue), whereas test management tools usually distinguish between test cases and actual tests/results. This way you can easily reuse the same test case for different releases and also store a history of test results. Additional entities such as test runs and test suites also usually make it a lot easier to manage and track your data. So instead of using Jira for test management, you might want to consider using a dedicated test management software that integrates with Jira. There are many test management tools out there, including open source projects:
http://www.opensourcetestmanagement.com/
You could also take a look at our tool TestRail, which also comes with Jira integration:
http://www.gurock.com/testrail/
http://www.gurock.com/testrail/jira-test-management.i.html
Have you tried looking in Jira's plugin directory at https://plugins.atlassian.com to see what's available to extend the core functionality? There may be something there that could be installed.
There are tools out there that combine both issue tracking and test management (e.g. elementool.com); however, if you are after a more feature-rich issue tracking experience, you may need to start looking at dedicated tools.
If after looking around you find that there are no suitable solutions to enable you to have things in one place, you may want to take a look at TestLodge test case management, which is a tool I have developed that integrates easily with Jira.
Why not just integrate JIRA with a test management tool? So, for example, we use Kualitee. It integrates with JIRA and provides traceability from the defect reported to the underlying test case and requirements. So, you can run your entire QA cycle on Kualitee and sync and assign defects to your team in JIRA.
You can also generate your test case execution reports and bug reports and export them so you are all set on that front as well.