The Bitbucket Pipelines test tab is excluding the "<system-out>" section of my JUnit test reports

I have configured my tests per the docs: https://support.atlassian.com/bitbucket-cloud/docs/test-reporting-in-pipelines/
I'm generating the test reports with the console output captured, but Bitbucket appears to insist on excluding that output from the test tab.
I can find no documentation from Bitbucket on whether this is configurable.
My report looks like this:
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
  <testsuite name="pytest" errors="0" failures="1" skipped="0" tests="1" time="3.402" timestamp="2023-01-28T20:38:30.862709" hostname="6745580f0e58">
    <testcase classname="tests.mytest" name="test_force_error" time="0.090">
      <failure message="ValueError: Some error.">Traceback (most recent call last):
  File "/workspaces/app/tests/mytest.py", line 29, in test_force_error
    assert blah.blah(
  File "/workspaces/app/tests/mytest.py", line 91, in blah
    raise ValueError(
ValueError: Some error.</failure>
      <system-out>--------------------------------- Captured Log ---------------------------------
--------------------------------- Captured Out ---------------------------------
I WANT TO SEE THIS IN THE TEST TAB
      </system-out>
      <system-err>--------------------------------- Captured Err ---------------------------------
WARNING: I WANT TO SEE STDERR TOO
      </system-err>
    </testcase>
  </testsuite>
</testsuites>
In the test tab in the Bitbucket UI, everything after </failure> is not shown.
How do I get Bitbucket to include <system-out> and <system-err>? I deliberately included them in the test reports so I could view the output in the test tab for each test individually, instead of having to pore over the entire test output.
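For reference, pytest itself can put the captured output into the report: with pytest 5.4 or later, the junit_logging option controls whether captured stdout/stderr ends up in <system-out>/<system-err>. A report like the one above can be produced with something like the following (the report path is an assumption):
pytest --junitxml=test-reports/report.xml -o junit_logging=all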

As stated in the documentation you linked, the pipeline only detects report files; it does not read your console output. If your test framework doesn't offer a ready-made reporter that produces such files, you can always write the file yourself, as you have attempted.
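If the test tab simply won't render those sections, one workaround is to surface the captured output in the pipeline log instead. A minimal sketch, assuming the report is written to test-reports/report.xml (the path and element layout follow the sample report above, so adjust both to your setup):
# print_captured_output.py - echo <system-out>/<system-err> from a JUnit report to the build log
import xml.etree.ElementTree as ET

root = ET.parse("test-reports/report.xml").getroot()  # assumed path
for case in root.iter("testcase"):
    for tag in ("system-out", "system-err"):
        node = case.find(tag)
        if node is not None and node.text:
            print(f"--- {case.get('classname')}.{case.get('name')} {tag} ---")
            print(node.text.strip())
Running this as a follow-up step at least makes the captured output visible in the step log, even if the test tab keeps hiding it.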

Related

Jenkins PyLint Warnings tool parses log files but reports 'found 0 issues'

I have set up Jenkins to run pylint on all Python source files, and all the log files are generated (apparently correctly) into a sub-directory as follows:
Source\pylint_logs\pylint1.log, pylint2.log, ..., pylint75.log
I have included a --msg-template definition based on the instructions on my Jenkins Configure page: Post-build Actions->Record compiler warnings and static analysis results->Static Analysis Tools. The template is shown as:
msg-template={path}:{line}: [{msg_id}, {obj}] {msg} ({symbol})
An example of one of the log files being generated by Jenkins/pylint is as follows:
************* Module FigureView
myapp\Views\FigureView.py:1: [C0103, ] Module name "FigureView" doesn't conform to snake_case naming style (invalid-name)
myapp\Views\FigureView.py:30: [C0103, FigureView.__init__] Attribute name "ax" doesn't conform to snake_case naming style (invalid-name)
------------------------------------------------------------------
Your code has been rated at 8.57/10 (previous run: 8.57/10, +0.00)
For the PyLint Report File Pattern, I have: Source/pylint_logs/pylint*.log
It appears that PyLint Warnings is parsing the files because the console output looks like this:
[PyLint] Searching for all files in 'D:\Jenkins\workspace\PROJECT' that match the pattern 'Source/pylint_logs/pylint*.log'
[PyLint] -> found 75 files
[PyLint] Successfully parsed file D:\Jenkins\workspace\PROJECT\Source\pylint_logs\pylint1.log
[PyLint] -> found 0 issues (skipped 0 duplicates)
[PyLint] Successfully parsed file D:\Jenkins\workspace\PROJECT\Source\pylint_logs\pylint10.log
[PyLint] -> found 0 issues (skipped 0 duplicates)
This repeats for all 75 files, even though there are plenty of issues in the log files.
What is odd, is that when I was first prototyping the use of Jenkins on this project, I set it up to just run pylint on a single file. I ran across another StackOverflow post that showed a msg-template that allowed me to get it working (unable to get pylint output to populate the violations graph). I even got the graph to show up for the PyLint Warnings Trend. I used the following definition per the post:
msg-template={path}:{line}: [{msg_id}({symbol}), {obj}] {msg}
Note that this format is slightly different from the one recommended by my Jenkins page (shown earlier). Even though this worked for a single file, neither template now seems to work for multiple files, or else there is something other than the template causing the problem. My graph has flat-lined, and I always get 0 issues reported.
I have had trouble finding useful documentation on the Jenkins PyLint Warnings tool. Does anyone have any ideas or pointers to documentation I can research further? Thanks much!
Ensure you pass the output-format parameter in the pylint command. Example:
pylint --exit-zero --output-format=parseable module1 module2 > pylint.report
You have to set Pylint's msg-template option in .pylintrc as:
msg-template={path}: {line}: [{msg_id} ({symbol}), {obj}] {msg}
output-format=text
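As a sketch of how the two suggestions above fit together, here is a hypothetical Python driver that runs pylint once per source file and writes the per-file logs that the Source/pylint_logs/pylint*.log pattern expects; the module list and output directory are assumptions, not taken from the original job:
# run_pylint.py - hypothetical driver; adjust MODULES and OUT_DIR to your project
import subprocess
from pathlib import Path

MODULES = ["myapp/Views/FigureView.py"]  # assumption: list your real source files here
OUT_DIR = Path("Source/pylint_logs")
OUT_DIR.mkdir(parents=True, exist_ok=True)

# Same template the Jenkins parser is configured with
TEMPLATE = "{path}: {line}: [{msg_id} ({symbol}), {obj}] {msg}"

for i, module in enumerate(MODULES, start=1):
    with open(OUT_DIR / f"pylint{i}.log", "w") as log:
        subprocess.run(
            ["pylint", "--exit-zero", f"--msg-template={TEMPLATE}", module],
            stdout=log,
        )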

OSSEC Slack Integration

I want all OSSEC notifications to be routed to a Slack room instead of email. 2.9 Beta5 has an ossec-slack.sh active response script. The relevant parts of my ossec.conf are:
<command>
<name>ossec-slack</name>
<executable>ossec-slack.sh</executable>
<expect>srcip</expect>
<timeout_allowed>no</timeout_allowed>
</command>
<active-response>
<command>ossec-slack</command>
<location>local</location>
<level>1</level>
</active-response>
This works for SSH logins (failed and successful), but as far as I can tell doesn't trigger anything else. What am I doing wrong/how are others doing this? Is this just beta software being beta software?
First make sure your ossec-slack.sh file has the correct information in the top:
# FILE: /var/ossec/active-response/bin/ossec-slack.sh
SLACKUSER="ossec"
CHANNEL="#slack_chanel" # include the hash "#"
SITE="https://hooks.slack.com/services/TOKEN"
SOURCE="ossec2slack"
Your "SLACKUSER" is the same as the "Customize Name" field that you set in your Slack WebHook Integrations page.
Now that your ossec-slack.sh file is set up you can test your Slack integration manually:
/var/ossec/active-response/bin/ossec-slack.sh
Running the script manually will post recent entries from your alerts log file:
/var/ossec/logs/alerts/alerts.log
When this script is triggered as an active-response, it will only post the information for the current alert, rather than posting from your log file.
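If the manual run posts nothing, it can help to confirm that the webhook URL itself works before suspecting the script. A minimal sketch (the URL, channel, and username are placeholders, and newer Slack webhooks may ignore the channel/username overrides):
# slack_webhook_test.py - hypothetical sanity check for the incoming-webhook URL
import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/TOKEN"  # placeholder: use your SITE value
payload = {"channel": "#slack_channel", "username": "ossec", "text": "ossec2slack test message"}

req = urllib.request.Request(
    WEBHOOK_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())  # Slack replies with "ok" on success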
When you have verified that you can post Slack messages manually, add the following XML blocks to your ossec.conf file:
<!-- FILE: /var/ossec/etc/ossec.conf -->
<ossec_config>
<command>
<name>ossec-slack</name>
<executable>ossec-slack.sh</executable>
<expect></expect> <!-- no expect args required -->
<timeout_allowed>no</timeout_allowed>
</command>
<active-response>
<command>ossec-slack</command>
<location>local</location>
<level>3</level>
</active-response>
</ossec_config>
The settings above will post to your Slack channel whenever a level 3 or above alert is triggered.
Note: no arguments are required within the <expect> tag, but the <expect> tag itself is required. See OSSEC's active-response documentation for more information.
To test this integration, restart your ossec server:
/var/ossec/bin/ossec-control restart
You should see the "OSSEC Started" alert very quickly.
If you do not see the alert, check your logs for any misconfigurations:
tail /var/ossec/logs/ossec.log
tail /var/ossec/logs/active-responses.log
Not a full answer, but adding on here: to ensure this works, make sure you don't have the following set in /var/ossec/etc/ossec.conf. If it's there, just remove it.
<active-response>
<disabled>yes</disabled>
</active-response>

Sonar not using lcov file

I have a Jenkins job that is using the "Invoke standalone Sonar analysis" for a javascript project.
I thought it was working fine with the following parameters:
sonar.sources=src
sonar.language=js
sonar.dynamicAnalysis=reuseReports
sonar.javascript.jstestdriver.coveragefile=target/test-coverage/jscover.lcov
sonar.javascript.lcov.reportPath=target/test-coverage/jscover.lcov
But then I noticed that the numbers that are being reported in Sonar do not match the number in the lcov file.
When I log in to Sonar I see the code coverage number as 30%.
But when I examine the lcov file, I get completely different numbers:
$lcov --summary target/test-coverage/jscover.lcov
...
lines......: 48.1%
functions..: 41.7%
branches...: no data found
And in fact, when I view the jscover.html report file, I see the total coverage at 48%.
Sonar reports it at 30%.
And drilling down into the individual files, Sonar's results do not match the results in the lcov file either.
For instance:
Just by looking at a particular file, /src/js/models/Call.js, lcov says it’s at 97% code coverage.
But Sonar displays this:
49.0% by unit tests. Line coverage: 97.0% (97/100). Branch coverage: 0.0% (0/98).
It’s as if Sonar is using the Branch Coverage AND the Line Coverage Stats to get the final code coverage results at 49.0%.
Do you know what I am doing wrong? Do you know why Sonar is not using the coverage results from the lcov file? Is it because the Branch Coverage has no data?
Thanks for any insight on this.
Code coverage is recomputed by SonarQube. SonarQube just retrieves from the report whether a line is covered or not by unit tests. Example:
DA:10,0 => it means that line 10 is not covered
DA:20,1 => it means that line 20 is covered
DA:30,5 => it means that line 30 is covered
Then SonarQube recomputes the code coverage:
Number of covered lines / (Number of covered lines + Number of uncovered lines)
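To predict the number SonarQube will display, you can recompute line coverage from the DA entries yourself. A minimal sketch, assuming the report path from the question:
# lcov_line_coverage.py - recompute line coverage the way the answer above describes
covered = uncovered = 0
with open("target/test-coverage/jscover.lcov") as report:  # assumed path
    for line in report:
        if line.startswith("DA:"):
            hits = int(line.strip()[3:].split(",")[1])
            if hits > 0:
                covered += 1
            else:
                uncovered += 1

# Number of covered lines / (Number of covered lines + Number of uncovered lines)
print(f"line coverage: {covered / (covered + uncovered):.1%}")
Comparing this figure with lcov --summary shows whether the two tools are counting the same set of DA records.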

Getting Statistics to show up in TC

I've set up TeamCity with my .sln file and got the unit tests to show up with TeamCity's CppUnit plugin, and I get test results in the TeamCity UI.
Now I'm trying to get trending reports to show up for my unit tests and code coverage.
As for code coverage, we're using vsinstr.exe and vsperfmon.exe, which produce an XML file.
I'm not quite sure what steps I should be taking to make the trending reports and code coverage (not as important) show up.
I've already seen this post, but the answer seems to require editing the build script, which I don't think would work for my case, since I'm building through MSBuild and the .sln file, and the tests are run through that build.
So basically I'm trying to get the Statistics tab to show up, and I'm not sure where to begin.
Just add a simple PowerShell step to your build configuration. Something like this:
function TeamCity-SetBuildStatistic([string]$key, [string]$value) {
Write-Output "##teamcity[buildStatisticValue key='$key' value='$value']"
}
$outputFile = 'MetricsResults.xml'
$xml = [xml] (Get-Content $outputFile)
$metrics = $xml.CodeMetricsReport.Targets.Target[0].Modules.Module.Metrics
$metrics.Metric | foreach { TeamCity-SetBuildStatistic "$($_.Name)" $_.Value.Replace(',', '') }
It uses XML output from FxCop Metrics. You have to update the script for your actual schema.
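If a PowerShell step isn't convenient, the same service messages can be printed from any build step. A rough Python equivalent, assuming the same FxCop-style metrics XML layout as the script above:
# teamcity_statistics.py - hypothetical: emit buildStatisticValue service messages
import xml.etree.ElementTree as ET

root = ET.parse("MetricsResults.xml").getroot()  # assumed file name, as above
for metric in root.findall("./Targets/Target/Modules/Module/Metrics/Metric"):
    key = metric.get("Name")
    value = metric.get("Value", "").replace(",", "")
    # TeamCity reads these lines from stdout and charts them on the Statistics tab
    print(f"##teamcity[buildStatisticValue key='{key}' value='{value}']")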

JUnit/Ant XML file format for stdout

I'm writing my own test program, and I want to be able to reuse tools like Hudson to display the results of the test cases. So far I've gotten the results of the tests all into the same XML file, with successes, failures, and errors.
Now I want to add the output of each test into the file. I have it set up so I can get the output of each test individually, but I can't seem to figure out how to get it into the XML file in a way Hudson will recognize.
I want to do something like this...
<testsuite>
<testcase>
<success classname="...">
<stdout>
This is standard output
</stdout>
</success>
</testcase>
</testsuite>
But this doesn't get recognized. I see in the Ant source code that it's defined as "system-out", but it also seems that it wants the file in this format.
<testsuite>
<testcase classname="..." />
<system-out>
This is standard output
</system-out>
</testsuite>
Is there any way to make this file so that I can have a specific stdout for each test case? Or do I have to make a new testsuite for every test case?
Edit: I seem to be able to get this format to work, but I'm still disappointed that I can't print the output of a successful test. I'd like it if, while browsing tests, someone could see the output of that test.
<testsuite>
<testcase name="...">
<failure message="shows up as error message">
standard out (shows up as stacktrace)
</failure>
</testcase>
</testsuite>
Is there anywhere that shows what format Hudson accepts? I feel bad committing bad revisions to source control just to get it to run on the automated build server.
I also can't seem to find where inside of Hudson the code for this functionality is.
Yes: you cannot use a "success" element; a passing test is just a <testcase> with no failure or error child.
The code for this functionality is at https://github.com/jenkinsci/jenkins/blob/master/core/src/main/java/hudson/tasks/junit/CaseResult.java
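For reference, here is a sketch of writing a report with a <system-out> child per <testcase> using only the standard library; whether Hudson (or a given Jenkins JUnit plugin version) displays it per test is version-dependent, so treat this as something to experiment with rather than a guaranteed format:
# write_junit_report.py - sketch: one <system-out> per <testcase>
import xml.etree.ElementTree as ET

suite = ET.Element("testsuite", name="my-suite", tests="1", failures="0", errors="0")
case = ET.SubElement(suite, "testcase", classname="my.module", name="test_something", time="0.01")
out = ET.SubElement(case, "system-out")
out.text = "This is standard output captured for this one test case"

ET.ElementTree(suite).write("TEST-results.xml", encoding="utf-8", xml_declaration=True)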
