I have a Jenkins job with a script that writes an XML report. The XML looks like this example:
<testng-results>
<reporter-output>
</reporter-output>
<suite name="suite0" failures="0" tests="0">
<test name="testcase1">
<class name="testcase1.class">
<test-method name="test1" started-at="2013-10-07T16:20:51Z" finished-at="2013-10-07T16:20:53Z" duration-ms="2754" status="PASS">
</test-method>
</class>
</test>
<test name="test2">
<class name="test2.class">
<test-method name="testcase2" started-at="2013-10-07T16:21:14Z" finished-at="2013-10-07T16:21:19Z" duration-ms="4163" status="FAIL">
<short-stacktrace>description of the error with a lot of information....
</short-stacktrace>
</test-method>
</class>
</test>
</suite>
</testng-results>
This XML is read by the TestNG plugin, which shows me the tests with their duration and the number of failed and passed tests. But the failed tests don't show the stacktrace. Is there a way to show the description of the error to the user in Jenkins? What tag must I use?
I have uploaded the testng-results.xml to a git branch and am using the git repository as the workspace for the Jenkins job.
Since the tests I run on Jenkins are regression tests, I would expect the TestNG reports to vary whenever there is a regression defect. I noticed that the TestNG report displayed in Jenkins is just a readable version of the testng-results.xml file, i.e. an exact copy of the testng-results.xml file in my workspace.
I have changed the test script ChromeTest.f() to fail on purpose, yet the TestNG results in Jenkins still mark it as PASS, as per the screenshot below:
[screenshot: Jenkins TestNG results showing ChromeTest.f() as PASS]
Please find the testng-results.xml file below:
<?xml version="1.0" encoding="UTF-8"?>
<testng-results skipped="0" failed="0" ignored="0" total="9" passed="9">
<reporter-output>
</reporter-output>
<suite name="Default suite" duration-ms="159837" started-at="2017-09-12T16:48:30Z" finished-at="2017-09-12T16:51:10Z">
<groups>
</groups>
<test name="Default test" duration-ms="159837" started-at="2017-09-12T16:48:30Z" finished-at="2017-09-12T16:51:10Z">
<class name="IETests">
<test-method status="PASS" signature="beforeTest()[pri:1, instance:IETests#7921b0a2]" name="beforeTest" duration-ms="18341"
started-at="2017-09-12T09:48:30Z" finished-at="2017-09-12T09:48:49Z">
<reporter-output>
</reporter-output>
</test-method> <!-- beforeTest -->
<test-method status="PASS" signature="f()[pri:2,
instance:IETests#7921b0a2]" name="f" duration-ms="99893" started-
at="2017-09-12T09:48:49Z" finished-at="2017-09-12T09:50:29Z">
<reporter-output>
</reporter-output>
</test-method> <!-- f -->
<test-method status="PASS" signature="afterTest()[pri:3,
instance:IETests#7921b0a2]" name="afterTest" duration-ms="115"
started-at="2017-09-12T09:50:29Z" finished-at="2017-09-12T09:50:29Z">
<reporter-output>
</reporter-output>
</test-method> <!-- afterTest -->
</class> <!-- IETests -->
<class name="ChromeTest">
<test-method status="PASS" signature="beforeTest()[pri:4,
instance:ChromeTest#1efbd816]" name="beforeTest" duration-ms="12606"
started-at="2017-09-12T09:50:29Z" finished-at="2017-09-12T09:50:41Z">
<reporter-output>
</reporter-output>
</test-method> <!-- beforeTest -->
<test-method status="PASS" signature="f()[pri:5,
instance:ChromeTest#1efbd816]" name="f" duration-ms="1087" started-
at="2017-09-12T09:50:41Z" finished-at="2017-09-12T09:50:42Z">
<reporter-output>
</reporter-output>
</test-method> <!-- f -->
<test-method status="PASS" signature="afterTest()[pri:6,
instance:ChromeTest#1efbd816]" name="afterTest" duration-ms="243"
started-at="2017-09-12T09:50:43Z" finished-at="2017-09-12T09:50:43Z">
<reporter-output>
</reporter-output>
</test-method> <!-- afterTest -->
</class> <!-- ChromeTest -->
<class name="FirefoxTest">
<test-method status="PASS" signature="beforeTest()[pri:7,
instance:FirefoxTest#6a2bcfcb]" name="beforeTest" duration-ms="24220"
started-at="2017-09-12T09:50:43Z" finished-at="2017-09-12T09:51:07Z">
<reporter-output>
</reporter-output>
</test-method> <!-- beforeTest -->
<test-method status="PASS" signature="f()[pri:8,
instance:FirefoxTest#6a2bcfcb]" name="f" duration-ms="3212" started-
at="2017-09-12T09:51:07Z" finished-at="2017-09-12T09:51:10Z">
<reporter-output>
</reporter-output>
</test-method> <!-- f -->
<test-method status="PASS" signature="afterTest()[pri:9,
instance:FirefoxTest#6a2bcfcb]" name="afterTest" duration-ms="83"
started-at="2017-09-12T09:51:10Z" finished-at="2017-09-12T09:51:10Z">
<reporter-output>
</reporter-output>
</test-method> <!-- afterTest -->
</class> <!-- FirefoxTest -->
</test> <!-- Default test -->
</suite> <!-- Default suite -->
</testng-results>
Jenkins is just triggering the test on the command line and then, as you mention, parsing and displaying the XML results file.
Try to run it outside of Jenkins, say using Maven. Do you see it fail as you expect? If so, the issue is not related to Jenkins itself, and I would concentrate on the test code itself.
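If a full Maven setup is not at hand, another quick way to exercise the same suite outside Jenkins is to invoke TestNG programmatically; a minimal sketch, assuming TestNG is on the classpath and testng.xml is in the working directory (the class name is just for illustration):
import java.util.Collections;

import org.testng.TestNG;

public class RunSuiteLocally {
    public static void main(String[] args) {
        // Run the same suite Jenkins triggers, but locally, so you can compare
        // the freshly generated testng-results.xml with what Jenkins displays.
        TestNG testng = new TestNG();
        testng.setTestSuites(Collections.singletonList("testng.xml"));
        testng.run();

        // Exit non-zero if anything failed, the same signal a CI job would see.
        System.exit(testng.hasFailure() ? 1 : 0);
    }
}
If ChromeTest.f() still shows up as PASS here, the problem is in the test code rather than in Jenkins.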
I solved my problem by using the Maven and JUnit testing tools.
I also added my entire project workspace to git before running the test.
After the build ran in Jenkins, I could see the results for each build.
Thank you all for helping.
Here is the test result Jenkins shows me. The names are empty.
But in the XML that was generated and loaded in the Jenkins post-build step, the test names are present and correct:
<?xml version="1.0" encoding="utf-8"?>
<assemblies>
<assembly name="xyz" environment="32-bit .NET 4.0.30319.42000 [collection-per-class, parallel (8 threads)]" test-framework="xUnit.net 2.1.0.3179" run-date="2016-02-29" run-time="10:17:15" config-file="xyz" total="4" passed="2" failed="0" skipped="2" time="46.081" errors="0">
<errors />
<collection total="4" passed="2" failed="0" skipped="2" name="xyz" time="45.641">
<test name="CashFactorSetInValidValues" type="XUnit_DataManager_Tests.DataManagerGuiGeneral" method="CashFactorSetInValidValues" time="22.7359448" result="Pass">
<traits>
<trait name="DataManager" value="General" />
<trait name="General" value="CashFactor" />
</traits>
</test>
<test name="TestCurrencySetAll" type="XUnit_DataManager_Tests.DataManagerGuiGeneral" method="TestCurrencySetAll" time="0" result="Skip">
<traits>
<trait name="DataManager" value="General" />
<trait name="General" value="Currency" />
</traits>
<reason><![CDATA[Eine Ausnahme vom Typ "Xunit.SkipException" wurde ausgelöst.]]></reason>
</test>
<test name="TestCurrencyAllAvailable" type="XUnit_DataManager_Tests.DataManagerGuiGeneral" method="TestCurrencyAllAvailable" time="0" result="Skip">
<traits>
<trait name="DataManager" value="General" />
<trait name="General" value="Currency" />
</traits>
<reason><![CDATA[Eine Ausnahme vom Typ "Xunit.SkipException" wurde ausgelöst.]]></reason>
</test>
<test name="CashfactorSetValidValues" type="XUnit_DataManager_Tests.DataManagerGuiGeneral" method="CashfactorSetValidValues" time="14.8607297" result="Pass">
<traits>
<trait name="DataManager" value="General" />
<trait name="General" value="CashFactor" />
</traits>
</test>
</collection>
</assembly>
</assemblies>
I am using the new xUnit feature [SkippableFact], and I skipped 2 of the tests. The result XML is fine and shows exactly what happened, but apparently it cannot be interpreted.
EDIT: I tested it without SkippableFact and it still doesn't work.
I want to execute a test script on multiple (Android) devices. When I run my Java class with JUnit, I am able to execute on only one device. How can I execute on multiple devices at the same time?
Any suggestion would be appreciated.
My testng.xml file:
<suite name="Default suite" thread-count="2" parallel="tests">
<test name="Nexus">
<Parameters>
<parameter name="platform" value="Nexus"/>
<parameter name="browsername" value="Android"/>
<parameter name="udid" value="xyz" />
<parameter name="remoteurl" value="http://0.0.0.0:4723/wd/hub"/>
</Parameters>
<classes>
<class name="AppiumTest">
<methods>
<include name="Test1"/>
<include name="Test2"/>
<include name="Test3"/>
</methods>
</class>
</classes>
</test>
<test name="Moto E">
<Parameters>
<parameter name="platform" value="Moto E"/>
<parameter name="browsername" value="Android"/>
<parameter name="udid" value="abc" />
<parameter name="remoteurl" value="http://0.0.0.0:4726/wd/hub"/>
</Parameters>
<classes>
<class name="AppiumTest">
<methods>
<include name="Test1"/>
<include name="Test2"/>
<include name="Test3"/>
</methods>
</class>
</classes></suite>
If you use TestNG instead of JUnit, you can create a test suite with a testng.xml file that should look like this:
<?xml version="1.0" encoding="UTF-8"?>
<suite name="Suite" parallel="tests" thread-count="2">
<test name="Nexus 7">
<parameter name="udid" value="XXXX" />
<classes>
<class name="testNG.TestOne"/>
</classes>
</test> <!-- Test -->
<test name="HTC desrire">
<parameter name="udid" value="XXXX" />
<classes>
<class name="testNG.TestOne"/>
</classes>
</test> <!-- Test -->
</suite> <!-- Suite -->
Setting parallel="tests" and a thread-count of 2 allows two tests to be completed on separate devices in parallel.
All you need to do from here is configure Selenium Grid nodes with the capabilities of each device, and in your test script use the udid parameter passed in through the testng.xml (see the sketch below).
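For example, here is a rough sketch (not code from the question) of how the test class could receive the udid through TestNG's @Parameters and feed it into the Appium capabilities; the hub URL and capability values below are placeholders you would adapt to your own grid/node setup:
package testNG;

import java.net.URL;

import org.openqa.selenium.remote.DesiredCapabilities;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Parameters;

import io.appium.java_client.android.AndroidDriver;

public class TestOne {

    private AndroidDriver driver;

    @Parameters("udid")
    @BeforeTest
    public void setUp(String udid) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        // The udid comes from the <parameter> of whichever <test> this thread runs,
        // so each of the two parallel tests targets a different device.
        caps.setCapability("udid", udid);
        // Placeholder hub address; point this at your own Selenium Grid hub.
        driver = new AndroidDriver(new URL("http://localhost:4444/wd/hub"), caps);
    }
}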
Hope this helps.
Just launch another test after the first one is launched. Of course, each test must be pointed at a different device.
Try to apply the concept shown in the thread below. It uses the Selenium Grid concept to start two Appium sessions in parallel; this way you can run your scripts in parallel on two Android devices.
https://discuss.appium.io/t/connecting-appium-server-to-selenium-grid-for-android/804/10
There are many testng.xml files which I run via the same build.xml using a command line option.
I want build.xml to parse the test name from each testng.xml that is passed via the command line option.
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd" >
<suite name="tests" verbose="1" threads="10" preserve-order="true">
<test name="first" preserve-order="true">
<classes>
<class name="com.etc.first">
<methods>
<include name="First" />
</methods>
</class>
</classes>
</test>
<test name="second" preserve-order="true">
<classes>
<class name="com.etc.Second.java">
<methods>
<include name="Req" />
</methods>
</class>
</classes>
</test>
</suite>
<target name="test" depends="compile-test">
<junit failureProperty="test.failure">
<classpath refid="classpath.test" />
<formatter type="brief" usefile="false" />
<batchtest>
<fileset dir="${tst-dir}" includes="**/Test*.class" />
</batchtest>
</junit>
<fail message="test failed" if="test.failure" />
</target>
I want to print how many test cases are:
failed
error
passed
by making changes only in the build.xml file. How can I do that?
You can use the junitreport task to gather your test results.
If you need to print the summary metrics in your build file, you could use a filter chain to extract the information from the generated report.
There may (must?) be a simpler way to do this, but I didn't see it.
<target>
<junit failureProperty="test.failure">
<classpath refid="classpath.test" />
<!-- use XML formatter and let it output to file -->
<formatter type="xml" />
<!-- specify output dir for test result files -->
<batchtest todir="tmp/results">
<fileset dir="${tst-dir}" includes="**/Test*.class" />
</batchtest>
</junit>
<!-- generate report with junitreport -->
<junitreport todir="tmp">
<fileset dir="tmp/results" />
<report todir="tmp/report" />
</junitreport>
<!-- concat the report through a filter chain to extract what you want -->
<concat>
<fileset file="tmp/report/overview-summary.html" />
<filterchain>
<linecontainsregexp>
<regexp pattern='title="Display all tests"' />
</linecontainsregexp>
<tokenfilter>
<!-- < and > must appear escaped as "&lt;" and "&gt;" inside the attribute value -->
<replaceregex pattern='.*all tests.*&gt;(\d+)&lt;.*all failures.*&gt;(\d+)&lt;.*all errors.*&gt;(\d+)&lt;.*$' replace="Run: \1, Failed: \2, Errors: \3" />
</tokenfilter>
</filterchain>
</concat>
<fail message="test failed" if="test.failure" />
</target>
The output will be something like:
Buildfile: C:\\test\unit_test.xml
test:
[junit] Test MyUnitTest FAILED
[junit] Test MyUnitTest2 FAILED
[junitreport] Processing C:\\test\tmp\TESTS-TestSuites.xml to C:\DOCUME~1\xxx\LOCALS~1\Temp\1\null1075123857
[junitreport] Loading stylesheet jar:file:/C:/eclipse/eclipse-jee-ganymede-SR2-win32/eclipse/plugins/org.apache.ant_1.7.0.v200803061910/lib/ant-junit.jar!/org/apache/tools/ant/taskdefs/optional/junit/xsl/junit-frames.xsl
[junitreport] Transform time: 906ms
[junitreport] Deleting: C:\DOCUME~1\xxx\LOCALS~1\Temp\1\null1075123857
[concat] Run: 8, Failed: 4, Errors: 1
BUILD FAILED
C:\test\unit_test.xml:32: test failed
Total time: 1 second
If you are running a large number of tests, you will now have the overhead of generating the report just to extract the summary.
Exactly the same answer as sudocode's, but with this line to parse the report (works for RSA 8.5.1 / JUnit 4.11); I was not allowed to place this piece of code as a comment on sudocode's answer, nor am I allowed to comment:
<replaceregex pattern='&lt;td&gt;&lt;a title="Display all tests" href="all-tests.html"&gt;(\d+)&lt;/a&gt;&lt;/td&gt;&lt;td&gt;&lt;a title="Display all failures" href="alltests-fails.html"&gt;(\d+)&lt;/a&gt;&lt;/td&gt;&lt;td&gt;&lt;a title="Display all errors" href="alltests-errors.html"&gt;(\d+).*$' replace="Run: \1, Failed: \2, Errors: \3" />
Thanks to sudocode!
You can use junitreport to generate a combined XML report from a set of test runs.
To then generate a textual summary you can create an XSLT and use the ant XSLT target to format the file. This will produce an output file, but you can use ant to read this in and echo it to the console.
The XSLT should use something along the lines of the following to count the testcases, errors and failures:
count(//testsuites/testcase)
count(//testsuites/testcase/error)
count(//testsuites/testcase/failure)
(If you really only want to modify your Ant build file, you could generate the XSLT to a temp folder at build time and remove it afterwards.)
As an alternative to sudocode's approach: you could set the printsummary attribute on the junit task. This will print a summary after each test class; it's not an overall summary.
<junit failureProperty="test.failure" printsummary="yes">
<classpath refid="classpath.test" />
<formatter type="brief" usefile="false" />
<batchtest>
<fileset dir="${tst-dir}" includes="**/Test*.class" />
</batchtest>
</junit>
Try this attribute on your junit task:
printsummary="yes"
For a slick javadoc-like HTML report, change your formatter to:
<formatter type="xml" />
and then create the reports with a target that calls this:
<junitreport>
<fileset dir="${report.dir}/tmp">
<include name="TEST-*.xml" />
</fileset>
<report format="frames" styledir="${junitxslt.dir}" todir="${report.dir}/html" />
</junitreport>
Try embedding Jsoup into a custom Ant task and use the task in your build to extract the data you need from overview-summary.html.
My code snippet is below:
import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.Task;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
public class ExtractJunitSummaryTask extends Task {
protected String overviewSummaryHtml;
protected String outProperty;
public void execute() throws BuildException {
StringBuffer sb = new StringBuffer();
// TODO: read and put overviewSummaryHtml file content into a String
Document doc = Jsoup.parse(sb.toString());
String allTests = doc.getElementsByAttributeValueContaining("href", "all-tests").text();
String allFailures = doc.getElementsByAttributeValueContaining("href", "alltests-fails").text();
String allErrors = doc.getElementsByAttributeValueContaining("href", "alltests-errors").text();
String allSkippedTests = doc.getElementsByAttributeValueContaining("href", "alltests-skipped").text();