How to avoid generating ignored tests in SpecFlow with MSTest?

Here is a sample feature file; the Examples block is tagged @ignore:
@ChildTest
Scenario Outline: Sub Report
Given I have clicked on EmpId: '<EmpId>' to view Report
When Loading mask is hidden
Then I have clicked on 'Back to Results' link.
@ignore
Examples:
| EmpId              | Date    |
| CHILD_TEST_SKIPPED | dynamic |
I would like the test generator to skip generating unit test methods for Examples tagged ignore.

You cannot get the test generator to skip those tests. SpecFlow tags become [TestCategory("ignore")] attributes on the generated test methods.
You will need to filter out those tests in Test Explorer. Enter -trait:ignore in the Test Explorer search bar to exclude those tests.
An alternative is to set the test to "pending":
Scenario: ...
Given pending
In the step definition for Given pending, call: Assert.Inconclusive("This test is temporarily disabled.");
Then the tests get executed, but report that they are neither passing nor failing. I do this quite a bit when implementing new features so I can write the tests ahead of time.
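A minimal step binding for this could look like the following (a sketch assuming SpecFlow with MSTest; the class name is made up):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using TechTalk.SpecFlow;

[Binding]
public class PendingSteps
{
    // Matches the literal step "Given pending" in any scenario.
    [Given(@"pending")]
    public void GivenPending()
    {
        // Reports the scenario as neither passed nor failed.
        Assert.Inconclusive("This test is temporarily disabled.");
    }
}
```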

Related

Can a Gherkin test be written which will always result in a failure

Let me start by stating that I am very wet behind the ears with Gherkin and Cucumber.
I've put together a PoC for my company of a Jenkins project that builds and executes tests whenever there is a check-in to a Git repository. When the tests have completed, Jenkins updates the tests managed in Xray for Jira.
The tests are Cucumber tests written in Gherkin. I have tried in vain to make a single test fail, just so I can show a failure in the demo I am going to give to upper management.
Here is the contents of my file HelloWorld.feature:
Feature: First Hello World
#firsth #hello #XT-93
Scenario Outline: First Hello World
Given I have "<task>" task
And Step from "<scenario>" in "<file>" feature file
When I attempt to solve it
Then I surely succeed
Examples:
| task  | scenario    | file          |
| first | First Hello | First Feature |
Currently all the tests I have pass. I have attempted to modify that test so that it would fail but thus far have only been able to get it to show in Xray as EXECUTING or TO DO.
I have searched to see if it was possible to create a test that would always result in a test failure but have not been able to find anything.
I admit I do not know Gherkin; I'm only using what was given to me to work with, so please forgive my question.
Thank you for any guidance anyone might be able to provide.
Cucumber assumes a step passes if no exception is thrown, so causing a test to fail is easy: just throw an exception in the step definition.
Most unit testing frameworks give you an explicit way to fail a test. You haven't mentioned the tech stack in use, but MS Test for .NET gives you Assert.Fail("reason for failure goes here.");
Or simply throw an explicit exception: throw new Exception("fail test on purpose");
As long as the step throws an exception the entire scenario should fail.
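As a sketch, using the asker's own "Then I surely succeed" step and assuming SpecFlow-style bindings with MSTest (the binding class is invented for illustration):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using TechTalk.SpecFlow;

[Binding]
public class FailingSteps
{
    [Then(@"I surely succeed")]
    public void ThenISurelySucceed()
    {
        // Either of these makes the step, and thus the scenario, fail:
        Assert.Fail("fail test on purpose");
        // throw new System.Exception("fail test on purpose");
    }
}
```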

Missing Result tab in Microsoft Test Manager

I'm looking for the 'Result' tab mentioned on MSDN.
In Test Manager 2013, I set up a test plan and performed my tests. I now want to share the results.
Basically, I should have:
Plan =>
Contents | Results | Properties
But I only have:
Plan =>
Contents | Properties
Is there a TFS configuration for this? Any idea on what's wrong?
Here is the version I am running: [screenshot omitted]

TFS build log shows that the Test Impact match pattern is set to "\bin\**\*.dll", not "*.exe"

I started a new project in VS 2013 and TFS 2013. After doing a little coding I ran some tests and created test cases in the Test Manager. The tests pass and the Test result files show the impact XML files attached. However, all the subsequent builds show no Tests Impacted. After checking the build logs I see that the Test Impact entry has a Match pattern that ends in:
"\bin\**\*.dll", but the application is a Windows Forms app, so the build output is an .exe.
Is there something I have missed in setting up the project that would cause this?
This is the output log section:
...
Run VS Test Runner  00:00:00
  There were no matches for the search pattern ...\bin\**\*test*.dll
  There were no matches for the search pattern ...\bin\**\*test*.appx
Run optional script after Test Runner  00:00:00
  Inputs
    EnvironmentVariables:
    Enabled: True
    Arguments:
    FilePath:
  Outputs
    Result: 0
Get Impacted Tests  00:00:00
  There were no matches for the search pattern ...\bin\**\*.dll
  A baseline build could not be located. Test impact analysis will not be
  performed for this build.
Publish Symbols
...
I found the issue: for Test Impact Analysis, the build must have a drop location. The "copy output to the server" option does not count as a drop location!

Demonstration using Spock

I'm going to be doing a presentation on Spock next week and as part of the presentation I need to give a demonstration. I have used Spock a bit before for a project but haven't used it in about a year or so.
The demonstration needs to be more than just a "hello world" type demonstration. I'm looking for ideas for cool things I can demonstrate using Spock ... any ideas?
The only thing I have now is the basic example that is included in the "getting started" section of the Spock website.
def "length of Spock's and his friends' names"() {
    expect:
    name.size() == length

    where:
    name   << ["Kirk", "Spock", "Scotty"]
    length << [4, 5, 6]

    /* Equivalent data-table form:
       name     | length
       "Kirk"   | 4
       "Spock"  | 5
       "Scotty" | 6
    */
}
Spock is the same tool for end-to-end testing as well as unit testing. Since it is based on Groovy, you can build your own simple DSL-based automation framework on top of Spock. I have around 5,000 automated tests running as part of CI using such a framework.
For acceptance testing:
- power asserts: focus on how easy it is to interpret failed assertions
- BDD with given-when-then
- data-driven specs and unrolling
- business-friendly reporting
- powerful UI automation by pairing Spock with Geb
For unit and integration testing:
- interaction-based testing and mocking
- simplified testing of XML and the like, thanks to Groovy goodies
Get more ideas from the Spock documentation.
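For instance, the data-driven point above can be demonstrated with a data table and @Unroll, which reports each row as a separately named test (a sketch assuming Spock is on the classpath; the spec is made up):

```groovy
import spock.lang.Specification
import spock.lang.Unroll

class MaximumSpec extends Specification {

    @Unroll
    def "maximum of #a and #b is #expected"() {
        expect:
        Math.max(a, b) == expected

        where:
        a | b | expected
        1 | 3 | 3
        7 | 4 | 7
        0 | 0 | 0
    }
}
```

Breaking one row on purpose is also a nice way to show off power asserts, since the failure output prints every sub-expression's value.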

How do you counter BDD-scripting anti-pattern in Specflow?

This is an example of one of our acceptance tests:
Feature: Add an effect to a level
In order to create a configuration
As a user
I want to be able to add effects to a level
Scenario: Add a new valve effect to a level
Given I have created a level named LEVEL123 with description fooDescription
And I am on the configuration page
When I click LEVEL123 in the level tree
And I expand the panel named Editor Panel
And I click the Add Valve Effect button
And the popup named ASRAddVal appears
And I click the Add new button
And I fill in these vertical fields
| field name | value |
| Name | Effect123 |
Then I should see the following texts on the screen
| text |
| Effect added : EFFECT123 |
We feel that this is getting a bit too verbose, and we want to hear how you reduce steps in SpecFlow. From what I've read so far, creating specific, non-reusable steps is not recommended, so what is considered "best practice" when doing this in SpecFlow?
Update:
What I am trying to say is that I've learned you should try to create generic steps so you can re-use them across multiple tests. One way to do that is to parameterize your steps, for example "Given I have created a level named ..", but the parameterization also introduces verbosity. I want to end up with something like Bryan Oakley suggests in his answer, but I can't see how to do that without creating steps which are very specific to each test. That again means I'll end up with a lot of steps, which reduces maintainability. It looks like SpecFlow has a way of defining composite steps by creating a class which inherits from a base class called "Steps", but this still introduces new steps.
So to summarize: show me a good approach for ending up with Bryan Oakley's answer that is maintainable.
I would simplify it to something like this:
Scenario: Add a new valve effect to a level
Given I have created a new level
When I add a new valve effect with the following values
| field name | value |
| Name | Effect123 |
Then I should get an on-screen confirmation that says "Effect added: Effect123"
The way I approached the problem was to imagine that you are completely redesigning the user interface. Would the test still be usable? For example, the test should still work even if the redesign has no "Add" button, or no longer uses a popup window.
You could try wording steps generically and using parameters:
Given I have created a new: Level
The ':' is only there so you can identify the parameter. This gives you one generic entry point for any step that needs to create a new something; it is then up to the step to look at the Level parameter and create a new Level.
Also try to come up with a naming convention everyone can use. It should be easy to discover which steps already exist, so you don't end up with duplicate, near-identical steps.
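A binding for such a generic step might look like this (a sketch; the class name and the entity-creation logic are hypothetical):

```csharp
using TechTalk.SpecFlow;

[Binding]
public class GenericCreationSteps
{
    // Matches e.g. "Given I have created a new: Level" and captures
    // everything after the colon as the entity type.
    [Given(@"I have created a new: (.*)")]
    public void GivenIHaveCreatedANew(string entityType)
    {
        switch (entityType)
        {
            case "Level":
                // create a new Level here (application-specific)
                break;
            default:
                throw new System.NotSupportedException(
                    "Unknown entity type: " + entityType);
        }
    }
}
```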
Can I also suggest that maybe some of the logic you are testing should go into unit tests instead? Perhaps what you mean by "test specific" is individual unit tests that are not covered by your acceptance tests.
Just a thought :)