Is there any way not to report a re-run failed test case (in the Extent report) as part of <rerunFailingTestsCount>?

Environment: Cucumber v4.0.0 | Selenium v3.8.1 | JUnit v4.12 | Extent Report 3.0/4.0 (either one)
I am using the Surefire configuration below to re-run failed test cases. If a test case fails, it gets one more attempt; let's say the test case passes on the 2nd attempt.
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>${maven-surefire.plugin.version}</version>
    <configuration>
        <parallel>methods</parallel>
        <threadCount>1</threadCount>
        <reuseForks>false</reuseForks>
        <testErrorIgnore>true</testErrorIgnore>
        <testFailureIgnore>true</testFailureIgnore>
        <includes>
            <include>**/*RunCukeTest.java</include>
        </includes>
        <rerunFailingTestsCount>1</rerunFailingTestsCount>
    </configuration>
</plugin>
Once the overall build is completed, the Cucumber-Maven and Cluecumber reports contain details of only the passed attempt, not the failed 1st attempt, which sounds perfect. But
when I check the Extent report, I get details of both attempts (failed and passed).
Can someone guide me on the following two implementations?
1. How can I report only the passing result (in the Extent report) when re-running failed tests via <rerunFailingTestsCount> (with or without the adapter, either would work)? I do not want to report the failed attempt if the test case passed at the nth attempt.
2. Sometimes we need to analyse why a few test cases only pass on the 2nd/3rd attempt. So, is there a way to report the failed attempts in a separate report when re-running failed tests via <rerunFailingTestsCount>?
Any thoughts would be much appreciated, as this would improve the report and give us the best failed/passed test case analysis when re-running failed tests.

From what I remember, to re-run a test, TestNG initializes execution from scratch, which creates a new suite and therefore a brand new report (overwriting the existing one with new data).
In the current scheme of things, this is not possible, but there is an open ticket for this enhancement: github.com/extent-framework/extentreports-java/issues/25

To the best of my knowledge, this was possible using @ExtendedCucumberOptions in 1.2.5, where one could generate a separate report for failed test cases, but it is not possible from 4.0.0 onwards.
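Since neither adapter supports this out of the box, one workaround is to post-process the attempt results yourself before feeding them into the Extent report: keep only the last attempt per scenario for the main report (point 1), and collect the earlier failed attempts into a separate flaky-test report (point 2). A minimal, library-free Java sketch of that filtering logic; the class, method, and scenario names here are hypothetical, not part of any adapter API:

```java
import java.util.*;

public class AttemptFilter {
    public record Attempt(String test, String status) {}

    // Point 1: keep only the last attempt per test (what the main report should show).
    public static Map<String, String> lastAttempts(List<Attempt> attempts) {
        Map<String, String> last = new LinkedHashMap<>();
        for (Attempt a : attempts) last.put(a.test(), a.status());
        return last;
    }

    // Point 2: collect the earlier, non-final failed attempts for flaky-test analysis.
    public static List<Attempt> earlierFailures(List<Attempt> attempts) {
        Map<String, Integer> lastIndex = new HashMap<>();
        for (int i = 0; i < attempts.size(); i++) lastIndex.put(attempts.get(i).test(), i);
        List<Attempt> out = new ArrayList<>();
        for (int i = 0; i < attempts.size(); i++) {
            Attempt a = attempts.get(i);
            if (i != lastIndex.get(a.test()) && "FAILED".equals(a.status())) out.add(a);
        }
        return out;
    }

    public static void main(String[] args) {
        List<Attempt> attempts = List.of(
                new Attempt("Login scenario", "FAILED"),
                new Attempt("Login scenario", "PASSED"),
                new Attempt("Search scenario", "PASSED"));
        System.out.println(lastAttempts(attempts));    // {Login scenario=PASSED, Search scenario=PASSED}
        System.out.println(earlierFailures(attempts)); // [Attempt[test=Login scenario, status=FAILED]]
    }
}
```

The same idea could be applied inside a custom Cucumber plugin or a listener that buffers results per scenario and only writes the final outcome to the main report, routing the earlier failures to a second report.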

Jenkins - Test Results Analyzer - Is there a way to show the status of the Recent run instead of a grouped result?

Jenkins - Test Results Analyzer - Every recent test shows as FAIL even though it passed, because the previous test run failed. Is there a way to show the latest test result? For example, I have a WDIO test that has 4 test cases, and Test Case 4 failed in my run number 1. Now, when I run again (say, run 2), the status for the entire script shows as FAILED even though all 4 test cases under it passed. So, is there a way we can show the latest/specific run status in the high-level status? Since this high-level status says FAIL even though all the scripts under it passed, it does not give me a clear picture. Thanks, and I appreciate any help.
I tried the options under Tools > Test Results Analyzer, and it is still the same. I tried searching online too, but there is not much anywhere on this issue. Thanks.

Structuring configuration for apps in Common Test suites

I have hit an issue with Common Test and the way I specify the configuration for the apps I test. I have several collections of test suites where each collection of test suites has a ct_hook module to set up some things.
The way I configure apps I’m going to test is to call application:load/1 and then application:set_env/3 before I call application:ensure_all_started/1.
For individual test suites (or collections of them), this works well. However, when I run rebar3 ct, it (naturally) runs multiple test suites in succession, and if I need to configure an app that I'm going to use in a later run, it is too late to call application:set_env/3 if that app was already loaded indirectly (as a dependency, or even a dependency's dependency) in an earlier suite's ct_hook:
init/2 in first_ct_hook:

% loads app_a, but also its dependency app_b and *app_b's* dependency app_z:
application:load(app_a),
application:set_env(app_a, database, my_db_config),
% …
% great success!

init/2 in second_ct_hook:

application:load(app_b), % loads app_b (its dependency app_z is already loaded)
application:set_env(app_a, database, my_db_config),
application:set_env(app_z, important, my_important_config), % oh no! too late!
What’s the proper way to do this?

Running Geb + spock tests headless

I have a number of geb functional tests for a grails application.
The tests are working as expected when executed from terminal or IDE.
However, the tests need to be executed by Hudson, so they are run in headless mode using Xvfb.
The problem is that the tests keep failing, or behaving unexpectedly, returning errors like RequiredPageContentNotPresent and StaleElementReferenceException in places that don't make sense.
For example:
(at LicencePage is verified above, and page isn't changed)
when:
addDocument(Data.Test_Doc_name,Data.Test_Doc_file)
sometimes throws
Failure: Add Actual Licence (HomePageSpec)
| geb.error.RequiredPageContentNotPresent: The required page content 'addDocument - SimplePageContent (owner: LicencePage, args: [Functional Test Doc, /var/lib/hudson/jobs/KB-Functional_Tests/workspace/app/../manual_test_data/so_v3/os_test_1], value: null)' is not present
at geb.content.TemplateDerivedPageContent.require(TemplateDerivedPageContent.groovy:61)
at geb.content.PageContentTemplate.create_closure1(PageContentTemplate.groovy:63)
at geb.content.PageContentTemplate.create(PageContentTemplate.groovy:82)
at geb.content.PageContentTemplate.get(PageContentTemplate.groovy:54)
at geb.content.NavigableSupport.getContent(NavigableSupport.groovy:45)
at geb.content.NavigableSupport.methodMissing(NavigableSupport.groovy:121)
at geb.Browser.methodMissing(Browser.groovy:194)
at geb.spock.GebSpec.methodMissing(GebSpec.groovy:51)
at HomePageSpec.Add Actual Licence (HomePageSpec.groovy:228)
The method addDocument() is defined on an 'abstract' page which LicencePage extends. In most cases like this, if I copy the method code directly into my Spec, it works, although that ruins all the structure I have built into my test pages.
Does anyone have experience running Geb tests with Xvfb? Have you faced these issues?
All tests pass when executed locally, and this is not a data issue, as the DB is always cleared.
Also, without making any changes, the tests behave non-deterministically (on Hudson), so the above exception is not always thrown. Without any changes at all, the tests sometimes succeed and sometimes fail.
The description you gave seems to be the symptom of a flaky test suite. We were facing this problem as well some time ago. A good starting point is this presentation (around minute 35) and the Geb documentation on waiting.
If you think it could have something to do with Xvfb (which I have no experience with), you could try using PhantomJS as the test runner and check whether it works correctly.
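At its core, Geb's waitFor {} (like Selenium's WebDriverWait) is just polling: re-evaluate a condition until it holds or a timeout elapses, instead of asserting that content is present immediately after an action. A minimal, library-free Java sketch of that idea, purely to illustrate the mechanism (this is not Geb API):

```java
import java.util.function.BooleanSupplier;

public class WaitUtil {
    // Poll a condition until it holds or the timeout elapses, sleeping between checks.
    // Returns true as soon as the condition holds, false if the timeout is reached.
    public static boolean waitFor(BooleanSupplier condition, long timeoutMillis, long pollMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) return false;
            try {
                Thread.sleep(pollMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Simulate content that only "appears" on the third poll.
        int[] polls = {0};
        boolean found = waitFor(() -> ++polls[0] >= 3, 1000, 10);
        System.out.println(found); // true
    }
}
```

In Geb itself you would express this declaratively, e.g. by defining the content with the wait: true option or wrapping the check in waitFor { ... }, rather than hand-rolling the loop; a slower headless machine under Xvfb is exactly where missing waits surface as intermittent RequiredPageContentNotPresent and StaleElementReferenceException failures.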

Cobertura not analysing the project in grails

I'm using Grails 1.3.7. I installed the code-coverage plugin and placed cobertura-1.9.4.jar and asm-2.2.3.jar in the project's lib folder in my STS workspace. I run the tests with the grails test-app -coverage command. This runs all the JUnit test cases and generates the report in the cobertura folder under target, but the generated HTML shows zero classes, and all the other figures are zero as well.
At the end of the run, the console displays something like "0 classes loaded" and "0 classes saved" after executing the test cases.
I also tried adding the following to BuildConfig.groovy:
coverage { sourceInclusions = ['grails-app/target*'] }
It does not resolve the path.
So how can I solve this problem and get a correct Cobertura report?
When running the command mentioned above, will code coverage (Cobertura) automatically instrument the classes, or do we have to do it manually?
I went through the Cobertura instrumentation reference in the command-line documentation; it uses cobertura-instrument.bat --destination ..., but when I use those commands I get an error like "cobertura-instrument.bat is not recognized as an internal or external command".
How can I correct this and make the plugin work correctly, so that I get a real result rather than 0%?
Thanks.
Make sure your app is not running at the same time as the test suite. Cobertura needs to instrument your compiled code, and a running app can interfere with that.
Code coverage correctly shows the output after I changed the lib jars from asm-2.2.1 to asm-3.1, and also added asm-util.jar and oro.jar.

Ant with JUnit task - OutOfMemoryError

I have an interesting problem. I am using Ant to execute JUnit tests (a test suite composed of 50 tests) via the <junit> element in build.xml. The problem is that I receive an OutOfMemoryError.
I have enlarged the heap space using ANT_OPTS, but it did not help. When I execute the same test suite in Eclipse, everything is fine: memory is released thanks to the GC.
I think this problem is related to Ant and its JUnit task.
Maybe the logging of the tests is the reason (but, on the other hand, I have printsummary="false"; maybe outputtoformatters should be set to false as well?).
My second guess is that the TEST-*.xml file (generated at the end of the test run) is held in memory and only flushed at the end. Is there any way to reduce the logs that go into that file?
Please give me some clues.
You need to set the maxmemory attribute in the junit task. See the Ant documentation.
To sum up, I want to say that the root of the problem was logging. Logs were sent to a stream and not flushed. After switching the logs off, everything was fine.
By default, forking is off. Please try with fork enabled and forkmode set to perTest.
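Putting both answers together, the relevant <junit> attributes look roughly like this. Note that maxmemory only takes effect when fork is enabled; the refid, property names, and file patterns below are placeholders for whatever your build defines:

```xml
<junit fork="true" forkmode="perTest" maxmemory="512m"
       printsummary="false" outputtoformatters="false">
  <classpath refid="test.classpath"/>
  <batchtest todir="${reports.dir}">
    <fileset dir="${test.src.dir}" includes="**/*Test.java"/>
  </batchtest>
</junit>
```

Forking per test means each test class runs in a fresh JVM, so memory held by logging buffers or formatter output cannot accumulate across the whole 50-test suite.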
