JBehave Thucydides test case getting skipped - bdd

Hello, I am new to JBehave and Thucydides. All the steps in my .java file are executed, but the @When step is always skipped, and because of that my whole test gets skipped. I have tried several options, but the When step is always marked as pending when I run the test.

After executing the test case, check your console or report file for story/step error annotations.
Tests that contain no steps are considered pending. If one of the steps (in the given-when-then structure) is PENDING during execution, the whole test gets labelled as SKIPPED. See section 6.2.1 of http://www.wakaleo.com/thucydides-one-page/thucydides.html#_defining_high_level_tests_in_junit
In my experience, most pending steps come from a misspelled step name/title: the step text in the .story file and in the story's implementation file (your_story.java, depending on language) differ, e.g. "xx yy" != "xx yv".
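As an illustration, here is a minimal sketch of the kind of mismatch to look for (the story text, class, and method names are hypothetical). The text in the step annotation must match the step line in the .story file exactly:

// my_story.story (hypothetical):
//   Given a user on the login page
//   When the user submits valid credentials
//   Then the dashboard is shown

import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;

public class MyStorySteps {

    @Given("a user on the login page")
    public void givenAUserOnTheLoginPage() { /* set up */ }

    // If this annotation read "the user submits valid credentialz",
    // JBehave would report the step as PENDING and skip the scenario.
    @When("the user submits valid credentials")
    public void whenTheUserSubmitsValidCredentials() { /* exercise */ }

    @Then("the dashboard is shown")
    public void thenTheDashboardIsShown() { /* verify */ }
}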

Related

Can a Gherkin test be written which will result in a failure

Let me start by stating I am very wet behind the ears with Gherkin and Cucumber.
I've put together a PoC for my company: a Jenkins project that builds and executes tests when there is a check-in to a Git repository. When the tests have completed, Jenkins updates the tests managed in Xray for Jira.
The tests are Cucumber tests written in Gherkin. I have attempted, in vain, to make a single test fail, just to be able to add a failure to the demo I am going to give to upper management.
Here are the contents of my file HelloWorld.feature:
Feature: First Hello World

@firsth @hello @XT-93
Scenario Outline: First Hello World
  Given I have "<task>" task
  And Step from "<scenario>" in "<file>" feature file
  When I attempt to solve it
  Then I surely succeed

  Examples:
    | task  | scenario    | file          |
    | first | First Hello | First Feature |
Currently all the tests I have pass. I have attempted to modify that test so that it would fail, but thus far I have only been able to get it to show in Xray as EXECUTING or TO DO.
I have searched for a way to create a test that always results in a failure, but have not been able to find anything.
I know I do not know Gherkin; I'm only using what was given to me to work with, so please forgive my question.
Thank you for any guidance anyone might be able to provide.
Cucumber assumes a step passes if no exception is thrown. Causing a test to fail is easy. Just throw an exception in the step definition.
Most unit testing frameworks give you an explicit way to fail a test. You haven't mentioned the tech stack in use, but MSTest for .NET gives you Assert.Fail("reason for failure goes here.");
Or simply throw an explicit exception: throw new Exception("fail test on purpose");
As long as the step throws an exception the entire scenario should fail.
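For a Cucumber-JVM project, a minimal sketch of such a step definition might look like this (the step text matches the feature file above; the import path assumes a recent Cucumber-JVM, older versions use cucumber.api.java.en):

import io.cucumber.java.en.Then;
import static org.junit.Assert.fail;

public class HelloWorldSteps {

    @Then("I surely succeed")
    public void iSurelySucceed() {
        // Any uncaught exception fails this step and the whole scenario,
        // which Xray should then report as a failure.
        fail("failing on purpose for the demo");
    }
}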

SpecRunner: Test method getting called twice

I am using SpecRunner to run my test cases, and the scenarios are getting called twice.
What might be the issue?
Please find the scenario and test results attached below.
Are they failing scenarios? In the standard configuration SpecFlow+Runner retries failing tests.
To disable the retry of a scenario, you have to set the retryCount parameter in the execution element to 0. See http://www.specflow.org/plus/documentation/SpecFlowPlus-Runner-Profiles/#Execution
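A minimal sketch of the relevant part of a runner profile (the file name Default.srprofile and the schema version are assumptions; adjust to your project):

<?xml version="1.0" encoding="utf-8"?>
<TestProfile xmlns="http://www.specflow.org/schemas/plus/TestProfile/1.5">
  <!-- retryCount="0" disables the retry of failing scenarios -->
  <Execution retryCount="0" />
</TestProfile>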
Full disclosure: I am one of the developers of the SpecFlow+Runner.

JUnit test failing in a Jenkins Gradle build but not locally

I have a weird situation with Jenkins... We've just started using Gradle for a project at my job, and when I run the tests locally with JUnit everything is fine. But when Jenkins runs these tests for builds of branch "A", one test always fails because of an assert (always the same test).
org.junit.ComparisonFailure: expected:<E[ZZ0530]Z> but was:<E[SY5654]Z>
It looks like the mock is not injected or the mock is ignoring the "when" mocking statement.
Here is the test:
@Test
public void testEvent() {
    Date eventDateTime = TimeUtils.parseDate("2013-05-30 00:00:00");
    event.setEventDatetime(eventDateTime);
    // Mocking the prefix return
    Mockito.when(eventCodeHelperMock.getEventCodePrefixFromEvent(event)).thenReturn("EZZ");
    // Tested method
    eventWrapper.setSuffix("Z");
    // Event code = prefix + date + suffix
    assertEquals("EZZ0530Z", event.getEventCode());
}
What is even stranger: when I create a branch "B" from branch "A", all the tests succeed when the build runs on Jenkins.
I've done some research and tried forcing another build, wiping out the current workspace, and recreating the job, but it didn't work.
Thanks for your help!
I have had similar problems in the past, and they were due to the order in which the JUnit tests are run. For example, one test modifies the state of an object, but you don't see the effect until the tests run in a different order and unexpectedly fail. There is not enough code in your question to tell whether this is definitely the case, but I would recommend checking the order in which the tests are being run, and also looking at the objects you are using to determine whether their state is being 'dirtied' by another test.
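A minimal sketch of this failure mode (class and field names are hypothetical): a shared fixture that one test mutates and another silently depends on, so the outcome flips with execution order.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class OrderDependentTest {

    // Shared mutable state: whichever test runs first decides
    // what the other one sees.
    private static StringBuilder code = new StringBuilder("EZZ");

    @Test
    public void rewritesTheCode() {
        code.setLength(0);
        code.append("ESY"); // dirties the fixture for every later test
        assertEquals("ESY", code.toString());
    }

    @Test
    public void expectsTheDefaultCode() {
        // Passes when run before rewritesTheCode(), fails after it.
        assertEquals("EZZ", code.toString());
    }

    // The fix: drop 'static', or reset the state per test, e.g.
    // @Before public void setUp() { code = new StringBuilder("EZZ"); }
}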

Sample TFS 2010 Build Process Template for NCover [duplicate]

I was wondering if any of you have experience generating code coverage reports on TFS Build Server 2010 while running NUnit tests.
I know it can easily be done with the packaged alternative (MSTest + enabling coverage in the testrunconfig file), but things are a little more involved when using NUnit. I've found some info here and there pointing to NCover, but it seems outdated. I wonder if there are other alternatives and whether someone has actually implemented this.
Here's more info about our environment/needs:
- TFS Build Server 2010
- Tests are in plain class libraries (not Test libraries - i.e., no testrunconfig files associated), and are implemented in NUnit. We have no MSTests.
- We are interested in running coverage reports as part of each build and if possible setting coverage threshold requirements for pass/fail criteria.
We've done it with NUnit and NCover and are pretty happy with our results. NUnit execution is followed by NUnitTfs execution in order to get our test results published in the build log. Then NCover kicks in, generating our code coverage results.
Two things could be considered disadvantages:
NUnitTfs doesn't work well with NCover (at least I couldn't find a way to execute both in the same step), so, since NCover invokes NUnit itself, I have to run the unit tests twice: once to get the test results and once to get the coverage results out of NCover. Naturally, that makes my builds last longer.
Setting up the arguments to invoke NCover properly wasn't trivial. But since I installed it, I have never had to maintain it.
In any case, the resulting reporting (especially the trend aspect) is very useful for monitoring how our code evolves over time. Especially if you're working on a platform (as opposed to short-lived projects), trend reports are of great value.
EDIT
I'll try to present, in a quick & dirty manner, how I've implemented this; I hope it can be useful. We currently have NCover 3.4.12 on our build server.
Our simple naming convention regarding our NUnit assemblies is that if we have a production assembly "123.dll", then another assembly named "123_nunit.dll" exists that implements its tests. So, each build has several *_nunit.dll assemblies that are of interest.
The part of the build process template under "If not disable tests" is the one that was reworked to achieve our goals, in particular the section named "Run MSTest for Test Assemblies". The whole implementation is here, after some cleanup to make the flow easier to understand (the picture was too large to insert directly here).
At first, some additional Arguments are implemented in the Build Process Template & are then available to be set in each build definition:
We then form the NUnit args in "Formulate nunitCommandLine":
String.Format("{0} /xml={1}\\{2}.xml", nunitDLL, TestResultsDirectory, Path.GetFileNameWithoutExtension(nunitDLL))
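For a hypothetical test assembly C:\Builds\src\bin\123_nunit.dll and a TestResultsDirectory of C:\Builds\TestResults, this expands to the following nunit-console arguments:

C:\Builds\src\bin\123_nunit.dll /xml=C:\Builds\TestResults\123_nunit.xml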
This is then used in the "Invoke NUnit" activity.
If this succeeds and coverage is enabled for this build, we move on to "Generate NCover NCCOV" (the coverage file for this particular assembly). For this we invoke NCover.Console.exe with the following args:
String.Format("""{0}"" ""{1}"" //w ""{2}"" //x ""{3}\{4}"" //literal //ias {5} //onlywithsource //p ""{6}""",
NUnitPath,
Path.GetFileName(nunitDLL),
Path.GetDirectoryName(nunitDLL),
Path.GetDirectoryName(Path.GetDirectoryName(nunitDLL)),
Path.GetFileName(nunitDLL).Replace("_nunit.dll", ".nccov"),
Path.GetFileNameWithoutExtension(nunitDLL).Replace("_nunit", ""),
BuildDetail.BuildNumber)
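With the same hypothetical paths as above, an NUnitPath of C:\Program Files\NUnit\bin\nunit-console.exe and a build number of MyProject_20130530.1, the arguments expand to:

"C:\Program Files\NUnit\bin\nunit-console.exe" "123_nunit.dll" //w "C:\Builds\src\bin" //x "C:\Builds\src\123.nccov" //literal //ias 123 //onlywithsource //p "MyProject_20130530.1"

So NCover runs the tests from the assembly's own folder (//w), writes the coverage file 123.nccov one directory up (//x), instruments only the production assembly 123 (//ias), and passes the build number as the project (//p). The merge and report invocations below expand the same way.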
All of these run in the foreach loop "For all nunit dlls". When we exit the loop, we enter "Final NCover Activities", starting with "Merge NCCovs", where NCover.Console.exe is executed again, this time with different args:
String.Format("""{0}\*.nccov"" //s ""{0}\{1}.nccov"" //at ""{2}\{3}\{3}.trend"" //p {1} ",
Path.GetDirectoryName(Path.GetDirectoryName(testAssemblies(0))),
BuildDetail.BuildNumber,
NCoverDropLocation,
BuildDetail.BuildDefinition.TeamProject
)
When this has run, all the NCCOV files of the build have been merged into one NCCOV file named after the build, and the trend file (which monitors the build definition throughout its life) has been updated with the elements of the current build.
Now we only have to generate the final HTML report. This is done in "Generate final NCover rep", where we invoke NCover.Reporting with the following args:
String.Format(" ""{0}\{1}.nccov"" //or FullCoverageReport //op ""{2}\{1}_NCoverReport.html"" //p ""{1}"" //at ""{3}\{4}\{4}_{5}.trend"" ",
Path.GetDirectoryName(Path.GetDirectoryName(testAssemblies(0))),
BuildDetail.BuildNumber,
PathForNCoverResults,
NCoverDropLocation,
BuildDetail.BuildDefinition.TeamProject,
BuildType
)

How do I "map" a certain return value of a script to "yellow" status in Jenkins?

In Jenkins there is the possibility to create a free-style project which can contain a script execution. The build fails (becomes red) when the return code of the script is not 0.
Is there a possibility to make it "yellow"?
(Yellow usually indicates a successful build with failed tests.)
The system runs on Linux.
Give the Log Parser Plugin a try. That should do the trick for you.
One slightly hacky way to do it is to alter the job to publish test results and supply fake results.
I've got a job that publishes the test results from a file called "results.xml". The last step in my build script checks the return value of the build, copies either "results-good.xml" or "results-unstable.xml" to "results.xml", and then returns zero.
Thus, if the script fails in one of the early steps, the build is red. But if the build succeeds, it is green or yellow based on the return code it would have returned without this hack.
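A minimal sketch of what those two fake result files could contain, assuming the job publishes them as JUnit-format XML reports (file names as above; the exact attributes are an assumption of the standard JUnit report format):

results-good.xml:
<testsuite tests="1" failures="0">
  <testcase classname="fake" name="alwaysPasses"/>
</testsuite>

results-unstable.xml:
<testsuite tests="1" failures="1">
  <testcase classname="fake" name="alwaysFails">
    <failure message="marking the build unstable on purpose"/>
  </testcase>
</testsuite>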
