When I run my test solution, a single browser is launched, but it runs two feature files simultaneously, which causes the test cases to fail. It takes one step from one feature file and the next step from the other feature file.
Contrary to the comment left on your question, I think I may have enough context to answer you.
You describe feature files that share steps, and concerns about multiple browser instances. This suggests that each of your step files may be creating its own browser instance.
What you're likely looking to do instead is use a SpecFlow context -- SpecFlow provides its own ScenarioContext object you can use, or you can create your own context class and inject it.
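For example, here is a minimal sketch of sharing one browser per scenario through SpecFlow's context injection (assuming Selenium WebDriver; the BrowserContext class name is illustrative, not part of SpecFlow):
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using TechTalk.SpecFlow;

// Illustrative context class: one browser per scenario, shared by all step classes.
public class BrowserContext : IDisposable
{
    public IWebDriver Driver { get; } = new ChromeDriver();

    public void Dispose() => Driver.Quit();
}

[Binding]
public class LoginSteps
{
    private readonly BrowserContext _browser;

    // SpecFlow's context injection hands every binding class in the scenario
    // the same BrowserContext instance, so only one browser is launched.
    public LoginSteps(BrowserContext browser)
    {
        _browser = browser;
    }

    [When(@"the user opens the login page")]
    public void WhenTheUserOpensTheLoginPage()
    {
        _browser.Driver.Navigate().GoToUrl("https://example.com/login");
    }
}
Because BrowserContext implements IDisposable, SpecFlow disposes it at the end of each scenario, so the browser is closed automatically.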
Some links that might help:
SpecFlow docs on sharing data between bindings, which explains about ScenarioContext and FeatureContext:
https://docs.specflow.org/projects/specflow/en/latest/Bindings/Sharing-Data-between-Bindings.html
Here's an article on using SpecFlow with Selenium and the Page Object Model: https://docs.specflow.org/projects/specflow/en/latest/ui-automation/Selenium-with-Page-Object-Pattern.html
The SpecFlow YouTube channel will likely be helpful as it's full of experts walking through these sorts of examples: https://www.youtube.com/c/SpecFlowBDD/videos
Here's the first video in a 5 part series on how to automate a web application with Selenium and SpecFlow: https://www.youtube.com/watch?v=y1dAogvWVh8
Based on your question, it's also possible that your issue is that you want things to run in parallel, but your tests depend on one another or must run in a certain order. This will be a bit more complex to solve.
I strongly suggest you design your tests so that they can run in isolation. You may need to add separate data to a database, or arrange your tests so that they're not touching the same thing. This takes more work, but it is more than worth it, because it will give you better maintainability and reliability and also ensure the tests can run in parallel successfully.
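As a rough sketch of what "separate data" can look like in SpecFlow (the hook and key names here are purely illustrative):
using System;
using TechTalk.SpecFlow;

[Binding]
public class TestDataHooks
{
    private readonly ScenarioContext _scenarioContext;

    public TestDataHooks(ScenarioContext scenarioContext)
    {
        _scenarioContext = scenarioContext;
    }

    [BeforeScenario]
    public void CreateIsolatedTestData()
    {
        // A unique suffix per scenario keeps its data away from every other scenario,
        // which is what makes parallel (or out-of-order) runs safe.
        var runId = Guid.NewGuid().ToString("N");
        _scenarioContext["RunId"] = runId;
        // e.g. insert a customer named $"TestCustomer-{runId}" rather than reusing a shared record.
    }
}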
I hope this helps!
I have recently started using SpecFlow and I have 2 basic questions I need to clarify, also to confirm I am on the right track:
As I understand it, all the input data (test parameters for the scenarios) must be provided by the tester, and the same goes for the test data (input data for the tables involved in the test scenarios).
Are there any existing tools for quickly generating test data (and inserting it into the DB)? I am using Entity Framework as part of the data access layer. I was wondering about a tool that would read the data from a file, or perhaps a desktop application for entering values for the tables' fields (which could then generate a file from which another tool could read all the data and create the required objects, etc.).
I also had a look at Preparing data for a SpecFlow scenario - I was wondering whether there is already a framework that handles inserting/deleting test data to use alongside SpecFlow.
I don't think you are on the right track. SpecFlow is a BDD tool, but in some ways it only covers part of the process. Have a read of http://lizkeogh.com/2013/07/01/behavior-driven-development-shallow-and-deep/ and see if any of the scenarios sound familiar.
To move forwards I would recommend you start with http://dannorth.net/introducing-bdd/ to get a good idea of how it all began. Now let's consider your points:
The tester provides all the test data. Well, yes and no. The idea is that between yourself and the feature expert, you are able to have a conversation that provides all the examples you need to develop your feature. If you don't involve yourself in that conversation, then yes, all the data will come from the other side, but chances are it won't be of such high quality as when you ask the right questions and guide the conversation so the data follows a structure you can code tests to.
As an example here, when I first started with BDD I thought I could get the business experts to write the plain-text scenario files with less input from the developers, but in practice the documents tended to be less useful than when we were involved. Not because they couldn't write decent specifications, but because they couldn't refactor them to reuse bindings etc. Our skills were still needed in the process.
Why does data go into a database? A good test is isolated to the scope that it is testing. For a UI layer test this means that we don't have a database. For a business tier test we shouldn't be reliant on the database to get data either.
In practice a database is one of the most difficult things to include in your testing because once any part of the data changes you cause cascading test failures.
Instead I would recommend making your features smaller and providing the data for your test in the scenario or binding. This also makes the conversation easier, because the fiftieth row of a test pack is not something either party is going to remember. ;-) Try giving your data identities instead, so "Bob" might be an individual in a test you can discuss, where both sides understand what makes him an interesting example.
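As a sketch of what that can look like in a binding (the Customer type and table columns are illustrative; the data itself lives in the scenario, e.g. a table with a row for "Bob"):
using System.Collections.Generic;
using TechTalk.SpecFlow;

public class Customer
{
    public string Name { get; set; }
    public int Age { get; set; }
}

[Binding]
public class CustomerSteps
{
    private readonly Dictionary<string, Customer> _customers = new Dictionary<string, Customer>();

    // Matches a step such as: Given the following customers exist
    //   | Name | Age |
    //   | Bob  | 42  |
    [Given(@"the following customers exist")]
    public void GivenTheFollowingCustomersExist(Table table)
    {
        foreach (var row in table.Rows)
        {
            // Each named individual ("Bob") is an example both sides of the conversation can discuss.
            _customers[row["Name"]] = new Customer
            {
                Name = row["Name"],
                Age = int.Parse(row["Age"])
            };
        }
    }
}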
good luck :-)
Update: With regard to using a database during testing, my experience is that there are a lot of complexities that make it a difficult choice to work with. Consider these points:
How will you reset the state of your data between tests?
How will you reset the state if one / some tests fail?
If you are using branches or even just if two developers are making changes at the same time, how will you support multiple test datasets?
How will you handle two instances of the tests running at the same time (don't forget the build server)?
Have a look at this question SpecFlow Integration Testing with Database Patterns which includes some patterns that you can use.
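If you do end up needing a real database, one common reset pattern is to wrap each scenario in an ambient transaction and roll it back afterwards. A hedged sketch (it only isolates data if the code under test opens its connections inside the scope):
using System.Transactions;
using TechTalk.SpecFlow;

[Binding]
public class DatabaseHooks
{
    private TransactionScope _scope;

    [BeforeScenario("database")]
    public void BeginTransaction()
    {
        // Everything written through the ambient transaction during the scenario
        // is rolled back below, so the next scenario starts from the same state.
        _scope = new TransactionScope();
    }

    [AfterScenario("database")]
    public void RollbackTransaction()
    {
        // Disposing without calling Complete() rolls the scenario's changes back.
        _scope.Dispose();
    }
}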
I have an ASP.Net MVC project and I thought I could use a tool like MS Test or NUnit to perform regression testing from the controller layer down to the database. However, I hit an issue: tests are not designed to run in order (you can use ordered tests in MS Test, but the tests still run concurrently), and the other problem is how to make the data created by one test accessible to another.
I have looked at Selenium and WatiN, but I just wanted to write something that is not dependent on the UI layer, which is most likely going to change and increase the amount of work needed to maintain the tests.
Any suggestions? Is it just the wrong tool for the job? Should I just use Selenium/WatiN?
Tests should always be independent of each other, so that running order doesn't matter. If your tests depend on other tests you are losing control of what you are testing.
WatiN, and I'm assuming Selenium, won't solve your ordering problem. I use WatiN and NUnit for UI automation and the running order is not guaranteed, which initially posed similar problems to what you're seeing.
In the vein of what dskh answered, you want independent tests, and I've done this in two ways for Integration / Regression black-ish box testing.
First: In your test setup, have any precondition data values set up so you're at a known "good state" (there's a sketch of this after the second point). For system regression test automation, I've got a number of database scripts that get called to reset data to a known state; this adds some dependencies, so be conscious of the design. Note: in straight unit testing, look at using mock objects to remove dependencies and get your test to be "testing one thing". Mock objects, stubbing method calls, etc. are the way to go if you can, which, based on your question, sounds likely.
Second: For cases where certain things absolutely had to be set up in a certain way, and scripting them into the test setup would have required a ridiculous amount of system-internal knowledge (e.g. all users set up + all permissions set up + etc.), a small number of "bootstrap" tests were created, in their own namespace to allow easy running via NUnit, to bootstrap the system. Keeping the number of tests small and making sure the tests were very stable was paramount. Now, on a fresh install, the bootstrap tests are run first and serve as advanced smoke tests; no further tests are run if any of the bootstrap tests fail. It is clunky in some ways, but the alternatives were clunkier or more time/resource/whatever consuming.
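A minimal sketch of the first approach, assuming NUnit and a SQL reset script (the script path and connection string are illustrative):
using System.Data.SqlClient;
using System.IO;
using NUnit.Framework;

[TestFixture]
public class CustomerRegressionTests
{
    [SetUp]
    public void ResetDatabaseToKnownState()
    {
        // Run the reset script before every test so each one starts from the same "good state".
        var resetScript = File.ReadAllText(@"Scripts\ResetTestData.sql");
        using (var connection = new SqlConnection(@"Server=.;Database=TestDb;Trusted_Connection=True;"))
        {
            connection.Open();
            using (var command = new SqlCommand(resetScript, connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }

    [Test]
    public void CreatingACustomer_PersistsToTheDatabase()
    {
        // Exercise the controller/service layer here, starting from the known state.
        Assert.Pass();
    }
}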
Update
The link below (and I assume the project) is dead.
The best option may be using Selenium and the Page Object Model.
See here: http://code.tutsplus.com/articles/maintainable-automated-ui-tests--net-35089
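For reference, a page object is roughly this shape (a sketch with Selenium WebDriver; the element IDs are illustrative):
using OpenQA.Selenium;

public class LoginPage
{
    private readonly IWebDriver _driver;

    public LoginPage(IWebDriver driver)
    {
        _driver = driver;
    }

    // The page object owns the locators, so when the UI changes only this class needs updating.
    public void LogInAs(string userName, string password)
    {
        _driver.FindElement(By.Id("username")).SendKeys(userName);
        _driver.FindElement(By.Id("password")).SendKeys(password);
        _driver.FindElement(By.Id("login-button")).Click();
    }
}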
Old Answer
The simplest solution I have found is Rob Conery's Quixote:
https://github.com/robconery/Quixote
It works by firing HTTP requests and consuming the responses.
It is simple to set up and use and provides integration testing. The tool also allows a series of tests to be executed in order, so you can create test dependencies.
I think the only part I don't get is how you handle the run results. If I set up a new project in Jira for test cases, how would I mark a test case as passed or failed without closing out the Jira issue?
Basically, I want the original Jira issue to always stay open and to be able to mark it as passed or failed against a specific release. The original issue should stay unchanged but somehow log a result set.
I do not have Bamboo.
Does that make any sense?
We have setup a simple custom workflow in Jira without using Confluence.
We added one new issue type - Test Case. And we have a new sub-task - Test Run.
Test Case has only three workflow actions: Pass, Fail and Invalid (the last one is to make Test Case redundant). And two statuses - Open and Invalid.
Test Run is automatically created when Test Case passes or fails. Users do not manually create test runs. We use one of the plugins to create a subtask on transition.
Test Run can be in a Passed or Failed state and has version info, the user who passed or failed it, and a comment.
This is it.
Here are some links that I used to set up Jira for Test Case Management:
Test Case Management in Jira
Using Jira for Test Case Management
Create On Transition Plugin
The approach we are following is as follows:
We use Confluence for implementing our test cases.
Each test case has its own page describing the setup, the scenario to run and all possible outcomes.
We have a test library page which is the parent of all these test cases.
When we want to start a validation cycle for a particular release, we use a script which generates, for each test case in Confluence, a corresponding 'test run' issue.
(#DennisG - JIRA allows you to define different issue types, each with its own workflow)
The summary is the summary of the test case
The description is the scenario and outcome of the test case
We have a specific Confluence link referring to the test case
The test run issue workflow contains 4 stages:
Open
In Progress
Blocked
Closed
And 3 resolutions:
Success
Failure
Review testcase
We then start validating all 'test run' issues.
Using dashboard gadgets it is easy to see how many testcases still need to be run, how many are blocked, how many have been done, and how many have failed ...
If the resolution is 'review testcase', we have the ability to adapt the test case itself.
Conclusion - JIRA is certainly usable as a test execution management environment. Confluence, as a wiki, provides an environment to build the necessary hierarchies (technical, functional).
Last point.
We have started to use Bonfire (a plugin for JIRA) extensively:
http://www.atlassian.com/en/software/bonfire
This shortens the 'manual' testing cycle considerably.
For us it had an ROI of a couple of weeks.
Hope this helps,
Francis
PS. If you're interested to use the script send me a note.
We are using a test case management tool called informup.
The test case management integrates with Jira.
In addition, it is fully integrated in the system, so if you want to use it as both a test case management and a bug tracking system, you can do that as well.
You can use PractiTest, a test management tool that integrates with JIRA. PractiTest covers your entire QA process, so you can use it to create Requirements, Tests and Test sets, and use the integration option to report issues in JIRA. You can also link between the different entities.
Read more about PractiTest's integration with JIRA.
To be honest, I'm not sure that using JIRA (or any other bug/issue tracking tool) as a test management tool is a good idea. The problem with this is that issue trackers usually have a single main entity (the issue), whereas test management tools usually distinguish between test cases and actual tests/results. This way you can easily reuse the same test case for different releases and also store a history of test results. Additional entities such as test runs and test suites also usually make it a lot easier to manage and track your data. So instead of using Jira for test management, you might want to consider using a dedicated test management software that integrates with Jira. There are many test management tools out there, including open source projects:
http://www.opensourcetestmanagement.com/
You could also take a look at our tool TestRail, which also comes with Jira integration:
http://www.gurock.com/testrail/
http://www.gurock.com/testrail/jira-test-management.i.html
Have you tried looking in Jira's plugin directory at https://plugins.atlassian.com to see what's available to extend the core functionality? There may be something there that you could install.
There are tools out there that combine both issue tracking and test management (e.g. elementool.com); however, if you are after a more feature-rich issue tracking experience, you may need to start looking at dedicated tools.
If after looking around you find that there are no suitable solutions to enable you to have things in one place, you may want to take a look at TestLodge test case management, which is a tool I have developed that integrates easily with Jira.
Why not just integrate JIRA with a test management tool? So, for example, we use Kualitee. It integrates with JIRA and provides traceability from the defect reported to the underlying test case and requirements. So, you can run your entire QA cycle on Kualitee and sync and assign defects to your team in JIRA.
You can also generate your test case execution reports and bug reports and export them so you are all set on that front as well.
We have used SpecFlow and WatiN for acceptance tests on my current project. The customer wants us to use Microsoft Coded UI instead. I have never tried Coded UI, but from what I've seen so far it looks cumbersome. I want to specify my acceptance tests up front, before I have a UI, not as the result of some record/playback process. Anyway, can someone please tell me why we should throw away the SpecFlow/WatiN combo and replace it with Coded UI?
I've also read that you can combine SpecFlow with Coded UI, but it looks like a lot of overhead for something I am already doing fine in SpecFlow.
I wrote a blog post on how to do this that you might find useful:
http://rburnham.wordpress.com/2011/03/15/bdd-ui-automation-with-specflow-and-coded-ui-tests/
The pros and cons of Coded UI Tests, as I see them: you are testing the application exactly the way the user will use it, which is good for acceptance tests and really good for end-to-end testing, but it also has its limitations. In the past, UI tests have been known to be fragile; for example, when MS created the VS2010 UI, almost all of their UI tests broke, mainly because of the technology change. Coded UI Tests help limit this by the way they match a control: they use a probability-based match, trying to find the best match based on the information available, such as the control name.
For us, Coded UI Tests were the choice because of technology limitations. Our legacy app is VB, and although CUIT does not work great with it (I'm in the process of writing an extension to get better control information), it was still our only choice. Also keep in mind that CUIT is new and has its own limitations. You should be prepared to be very structured in how you lay out your project, as maintaining your UIMaps can involve a fair amount of manual work due to the current end-to-end behaviour in VS2010; for example, creating a CUIT from an existing action recording always places the test in a UIMap called UIMap.uitest, and there is no way to change that or transfer it to another UIMap. If you use multiple UIMaps, this means you will need to record your steps first and then use them in your test. However, being .NET, it is still very flexible.
By far the best thing about SpecFlow is its Gherkin syntax, which gives you readability and living documentation. Normally you're testing features or behaviours of your app, which is where the value comes from, and SpecFlow generally aims the test just below the UI, so there is a little less chance of the test breaking when the UI changes here and there. SpecFlow, to me, is great when your application is under constant change and you want to ensure existing features keep working. It fits well in a Scrum environment, where you can write your scenarios as a description of how things should work. One limitation of SpecFlow I can see is that it's open to interpretation. Because of this, it can be easy to write a test that is not very reusable and is hard to maintain. I like to use more generic terms to describe my steps, like "Log in as User1", instead of "Go to Login Page, Enter Username and Password, Click login". Describing steps at that granular a level makes them harder to reuse and tightly couples them to the UI. How the login actually works should be up to the code behind, not the SpecFlow feature.
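A sketch of what that looks like in practice (the LoginAutomation helper is just an illustrative stand-in for whatever automation layer you use, whether a Coded UI UIMap or a WatiN page class):
using TechTalk.SpecFlow;

[Binding]
public class AuthenticationSteps
{
    [Given(@"I log in as (.*)")]
    public void GivenILogInAs(string userName)
    {
        // The feature file only says "I log in as User1"; which pages, fields and
        // buttons are involved is an implementation detail kept in the code behind.
        new LoginAutomation().LogInAs(userName);
    }
}

// Minimal stand-in so the sketch compiles; replace with your real automation code.
public class LoginAutomation
{
    public void LogInAs(string userName)
    {
        // drive the UI (or UIMap) here
    }
}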
Combining the two, however, seems to us more beneficial than just using Coded UI Tests. If we decide to completely change the UI, we would at least have the expected behaviours stored in our SpecFlow features in a way anyone can understand. In the end, you need to consider how the application will evolve and what type of application it is.
I'm using SpecFlow to do some BDD-style testing. Some of my features are UI tests, so they use WatiN. Some aren't UI tests, so they don't.
At the moment, I have a single StepDefinitions.cs file, covering all of my features. I have a BeforeScenario step that initializes WatiN. This means that all of my tests start up Internet Explorer, whether they need it or not.
Is there any way in SpecFlow to have a particular feature file associated with a particular set of step definitions? Or am I approaching this from the wrong angle?
There is a simple solution to your problem if you use tags.
First, tag your feature file to indicate that a particular scenario needs WatiN, like this:
Feature: Save Proportion Of Sample Pool Required
As an <User>
I want to <Configure size of the Sample required>
so that <I can advise the deployment team of resourcing requirements>.
@WatiN
Scenario: Save valid sample size mid range
Given the user enters 10 as sample size
When the user selects save
Then the value is stored
And then decorate the BeforeScenario binding with an attribute that indicates the tag:
[BeforeScenario("WatiN")]
public void BeforeScenario()
{
...
}
This BeforeScenario method will then only be called for scenarios tagged with @WatiN.
Currently (in SpecFlow 1.3) step-definitions are global and cannot be scoped to particular features.
This is by design to have the same behavior as Cucumber.
I asked the same question on the cucumber group:
http://groups.google.com/group/cukes/browse_thread/thread/20cd7e1db0a4bdaf/fd668f7346984df9#fd668f7346984df9
The bottom line is that the language defined by all the feature files should also be global (one global behavior for the whole application). Therefore, scoping definitions to features should be avoided. Personally, I am not yet fully convinced about this ...
However, your problem of starting WatiN only for scenarios that need UI integration can be solved in two different ways:
Tags and tagged hooks: You can tag your scenarios (e.g. with @web) and define a BeforeScenario hook that should only run for scenarios with a certain tag (e.g. [BeforeScenario("web")]). See the Selenium integration in our BookShop example: http://github.com/techtalk/SpecFlow-Examples/blob/master/ASP.NET-MVC/BookShop/BookShop.AcceptanceTests.Selenium/Support/SeleniumSupport.cs
We often completely separate scenarios that are bound to the UI from scenarios that are bound to a programmatic API (e.g. controller, view-model ...) into different projects. We tried to illustrate this in our BookShop example: http://github.com/techtalk/SpecFlow-Examples/tree/master/ASP.NET-MVC/BookShop/
Check this out (new feature in SpecFlow 1.4): https://github.com/techtalk/SpecFlow/wiki/Scoped-Bindings
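With scoped bindings, a step definition (or hook) can be restricted to a tag, feature or scenario. A hedged sketch (in newer SpecFlow versions the attribute is [Scope]; some earlier versions used [StepScope]):
using TechTalk.SpecFlow;

[Binding]
public class WebSearchSteps
{
    // This implementation is only used for scenarios tagged @web;
    // a non-UI binding elsewhere can implement the same step text.
    [When(@"I search for '(.*)'")]
    [Scope(Tag = "web")]
    public void WhenISearchFor(string term)
    {
        // drive the search through the browser here
    }
}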
I originally assumed that a step file was associated with a particular feature file. Once I realized this was not the case, it helped me improve all my SpecFlow code and feature files. The language of my feature files is now less context-dependent, which has resulted in more reusable step definitions and less code duplication. Now I organize my step files according to general similarities, not according to which feature they are for. As far as I know, there is no way to associate a step with a particular feature, but I am not a SpecFlow expert, so don't take my word for it.
If you would still like to associate your step files with a particular feature file, just give them similar names. There is no need to force a step to work only for that feature, even if the step code only makes sense for that feature, because if you happen to create a duplicate step for a different feature, SpecFlow will detect it as an ambiguous match. The behavior for ambiguous matches can be specified in an App.config file. See
http://cloud.github.com/downloads/techtalk/SpecFlow/SpecFlow%20Guide.pdf
for more details about the App.config file. By default, ambiguous matches are detected and reported as an error.
[edit]:
Actually, there is a problem with working this way (having step files associated with feature files in your mind only). The problem comes when you add or modify a .feature file and use the same wording you have used before, and you forget to add a step for it, but you don't notice this because you already created a step for that wording once before, and it was written in a context-sensitive manner. Also, I am no longer convinced of the usefulness of not associating step files with feature files. I don't think most clients would be very good at writing the specification in a context-independent manner. That is not how we normally write, talk or think.
The solution for this is to use tags and scoped bindings, tagging each test scenario according to whether it relates to the web UI or to the controller/core logic in code.
You can then drill the scope for each scenario down to any of the Before/After execution hooks listed below (see the sketch after the list):
BeforeTestRun
BeforeFeature
BeforeScenario
BeforeScenarioBlock
BeforeStep
AfterStep
AfterScenarioBlock
AfterScenario
AfterFeature
AfterTestRun
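A sketch of tag-scoped hooks along those lines (the tag names are illustrative):
using TechTalk.SpecFlow;

[Binding]
public class ScopedHooks
{
    [BeforeScenario("web")]
    public void StartBrowser()
    {
        // launch the browser only for scenarios tagged @web
    }

    [AfterScenario("web")]
    public void StopBrowser()
    {
        // close the browser after @web scenarios
    }

    [BeforeScenario("controller")]
    public void PrepareControllerTest()
    {
        // wire up in-memory dependencies for @controller scenarios instead
    }
}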
Also consider using an implementation-agnostic DSL along with implementation-specific step definitions. For example, use
When I search for 'Barbados'
instead of
When I type 'Barbados' in the search field and push the Search button
By implementing multiple step definition assemblies, the same Scenario can execute through different interfaces. We use this approach to test UIs, APIs, etc. using the same Scenario.
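As a rough illustration, the same step text can be bound once per assembly (a sketch; which step assemblies are picked up is controlled by your SpecFlow configuration):
using TechTalk.SpecFlow;

// In the UI automation project:
[Binding]
public class SearchStepsUi
{
    [When(@"I search for '(.*)'")]
    public void WhenISearchFor(string term)
    {
        // type into the search field and press the Search button via the browser
    }
}

// In the API automation project (a separate assembly, so the bindings never clash):
[Binding]
public class SearchStepsApi
{
    [When(@"I search for '(.*)'")]
    public void WhenISearchFor(string term)
    {
        // call the search endpoint directly
    }
}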