Automated Test Results - JUnit -> How to do ordinary grouping/formatting? - tfs

I am new around here; I have done a lot of googling, searching on this site, and asking around, and have not found a satisfactory answer.
I develop automated tests, UI as well as API. These are run by TFS and the results are written to a JUnit XML document, which TFS then reads. But alas, the formatting is atrocious and leaves one unable to use the output for anything.
Viz.: there is no information about the Test Suite (which is present in the XML), the actual request sent (which is in the log), or the response received, so one is left with absolutely no context for what has actually taken place: which request was sent, which test group/suite it belongs to, and what any potential error was.
As far as I have been able to uncover, TFS simply has little to no support for proper test result formatting when it comes to automated testing. I am very surprised by this in 2018. There is not even any documentation that would allow me to develop my own report structure/format in some kind of script.
What alternatives do I have? Can I automatically attach a generated HTML report somehow in TFS? Can I output more info anywhere?
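For reference, the kind of detail I mean is already present in a typical JUnit result file; here is a made-up illustration (suite, class and test names are invented):

<testsuite name="Checkout API suite" tests="2" failures="1" time="4.2">
  <testcase classname="checkout.CartTests" name="AddItem" time="2.1">
    <failure message="expected 200, got 500"/>
  </testcase>
  <testcase classname="checkout.CartTests" name="RemoveItem" time="2.1"/>
</testsuite>

The suite name and failure message are all there in the file; TFS just does not surface them.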

You can group by Test Suite, Owner, Priority, etc.
You can also double-click a specific test result and navigate to the test run summary to see more information there; you can attach files there as well.
For more information, see Review continuous test results after a build.

Related

Is it possible to publish arbitrary JUnit results to SonarQube?

I am running sitespeed.io tests in Jenkins, and have configured it to output junit format test results.
I'd now like to publish those test results to SonarQube (I realise I can publish them in Jenkins, but I have a requirement to keep everything in one place as much as possible).
However, when I add the test results file into the Sonar analysis (using sonar.junit.reportsPath=/path/to/sitespeed-results, having named my results file TESTS-sitespeed.xml) SonarQube doesn't seem to show any results on its dashboard.
I understand that SonarQube also has a setting to configure the location of test files; this is often cited as a reason for test results not being ingested correctly, which leads me to wonder whether what I'm trying to do is possible at all.
Any help would be greatly appreciated.
Thanks,
Jez
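For context, a minimal sonar-project.properties combining that report path with the test-location setting might look like this (a sketch; every value other than the report path is a placeholder):

sonar.projectKey=my:project
sonar.sources=src
sonar.tests=tests
sonar.junit.reportsPath=/path/to/sitespeed-results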
The property sonar.junit.reportsPath makes the analysis read and parse the report, but it will only save the information if the class name indicated by the report can be mapped back to a Java resource of the project.
I don't know what your test output looks like in Surefire format, but I suspect the classname won't match any resource of your project, so your report is parsed but its data is ignored.
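To illustrate (names invented): an entry like the following is only retained if com.example.FooTest resolves to an actual class in the analyzed project; otherwise it is parsed and then dropped.

<testsuite name="sitespeed" tests="1">
  <testcase classname="com.example.FooTest" name="pageLoadBudget" time="0.42"/>
</testsuite>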

HTTP access to on-going Jenkins build files

I guess the title is pretty self-explanatory. The reason I want that is so that I can make a live custom HTML reporter for my tests.
My test suite takes hours to complete, and although the tests generate HTML reports as soon as each test step is executed, it's only at post-build time that those report files get published.
Being able to see them as they get generated would reduce the time it takes for me and my teammates to analyze and act upon issues revealed by our test runs.
All I need is for Jenkins to let me access the build files while the build executes. Nothing fancy; I can take care of the rest. Is that possible? How?
In our setup there is always an intermediate file (typically XML), but the HTML files are created at the end of the job.
What you can do is use the progressive console output (http://jenkins/job/jobName/buildNumber/logText/progressiveText?start=0). Although you don't state which framework you use, most of them output something that is easy to parse, e.g. "Test xxx failed".
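As a rough sketch (the job URL and failure marker are placeholders, and you may need to add authentication for your instance), a script that tails that endpoint while the build runs could look like this:

import time
import requests

JOB_URL = "http://jenkins/job/jobName/lastBuild"  # placeholder job URL

def follow_console(job_url):
    start = 0
    while True:
        # Jenkins returns the console log from byte offset 'start' onwards
        resp = requests.get(job_url + "/logText/progressiveText",
                            params={"start": start})
        resp.raise_for_status()
        for line in resp.text.splitlines():
            # substitute your framework's failure marker here
            if "failed" in line.lower():
                print(line)
        # X-Text-Size is the offset to resume from on the next poll
        start = int(resp.headers.get("X-Text-Size", start))
        # X-More-Data is "true" while the build is still writing output
        if resp.headers.get("X-More-Data") != "true":
            break
        time.sleep(5)

follow_console(JOB_URL)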

Jenkins - view results in web browser

My Jenkins job runs many tests that create log files. In case of failure, I want to look at the log of the failed test. I'd rather use Jenkins web-server to do it, even have a link in the email it sends me.
Is there any plugin that can do it? Or maybe another way?
You provide few details in your question, so it is impossible to give specific advice. At a general level: this is already possible. When your test framework creates JUnit XML files with test results, the test output can be included between the <failure> and </failure> tags. Test frameworks usually take care of this automatically, so perhaps you are not using a test framework and are generating the XML result files by hand?
I recommend you adopt some test framework. It is usually well worth the effort.
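For illustration, a result entry with the log captured inside the failure element might look like this (names and log lines are invented):

<testcase classname="LoginTests" name="InvalidPassword">
  <failure message="assertion failed">
Expected error banner, got timeout after 30s.
12:01:03 opening /login
12:01:33 TimeoutError waiting for #error-banner
  </failure>
</testcase>

Jenkins' JUnit publisher then shows that text directly on the test result page, which you can link to from an email.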

How to make a FitNesse test require explicit running

Is there a way to mark a FitNesse test such that it will not be run as part of a suite, but can still be run manually?
We have our FitNesse tests running as part of our continuous integration, so new tests that are not yet implemented cause the build to fail. We'd like a way to allow our testers and BAs to be able to add new tests that will fail while still continuing to validate the existing tests as part of continuous integration.
Any suggestions?
The best way to do this is with suite tags. You can mark tests with a tag from the properties page, and then filter for them or filter to exclude them.
In this case I would exclude with a "NotOnCI" tag. Then add the following argument to the URL:
ExcludeSuiteFilter=NotOnCI
The full URL might then look like this:
http://localhost:8080/FrontPage?test&ExcludeSuiteFilter=NotOnCI
You can select multiple tags by separating them with commas, but they act as "or", not "and".
Check the FitNesse user guide for more details. http://fitnesse.org/FitNesse.UserGuide.TestSuites.TagsAndFilters
Would it make sense to have multiple Suites, one for regression tests that should always pass, and another one for the tests that are not yet implemented?
Testers and BAs can add tests/suites to the latter suite and the CI server only runs tests in the former suite.
Once a developer believes they have implemented the behavior, they can move the test/suite relating to that functionality to the 'regression' suite so that it is checked in continuous integration.
This might make the status of a test/suite a bit more explicit/obvious than just having a tag. It would also provide a clear handover from development to test/BA to indicate the implementation is finished.
If you just want a test/suite not to run during an overall run of a suite that contains it, you can also tick 'Skip (Recursive)' on the properties page of that test/suite (below 'Page Type').

Is it possible to configure CruiseControl.net to display task exit codes in results?

My company is developing a web application that builds with Ant. I've been tasked with getting CruiseControl.NET to differentiate between a build failure and a unit test failure, which it can't do natively. (It currently lumps both together, which doesn't help developers understand what's broken.)
I have CC.NET call a script that returns specific exit codes depending on the nature of the Ant task failure. I'd like these exit codes to be reflected in the CC.NET failure report/dashboard, but am having trouble finding resources on how this might be done.
Any suggestions?
Not directly. All the reports and displays work from information in the logs, which are XML files; the display is produced by applying XSLT to these XML files.
Take a look at your build logs and unit test logs to see whether each of those processes writes the failure information to its respective log file.
If they do, you should be able to write a custom XSLT or modify the existing XSLT to display that information.
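As a rough sketch, assuming your script's exit code ends up in the build log as a hypothetical <exitCode> element (e.g. merged in via a ccnet <merge> task), a custom XSLT fragment could surface it like this:

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- hypothetical: assumes an <exitCode> element was merged into the log -->
  <xsl:template match="/">
    <xsl:if test="//exitCode">
      <div>
        Task exit code: <xsl:value-of select="//exitCode"/>
      </div>
    </xsl:if>
  </xsl:template>
</xsl:stylesheet>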
Edit:
A different approach, based on your comment: you could probably redirect the Ant error code to a file, then have a separate ccnet task that takes the error code from that file and re-formats and displays it (depending on how/where you want it displayed).
