I've set up TeamCity with my .sln file and got the unit tests to show up using the CppUnit plugin that TeamCity has, and I get test results in the TeamCity UI.
Now I'm trying to get trending reports to show up for my unit tests and code coverage.
As for code coverage, we're using vsinstr.exe and vsperfmon.exe, which produce an XML file.
I'm not quite sure what steps I should take to make the trending reports and code coverage (less important) show up.
I've already seen this post, but the answer seems to require editing the build script, which I don't think would work in my case since I'm building through MSBuild and the .sln file, and the tests are run as part of that build.
So basically I'm trying to get the Statistics tab to show up, and I'm not sure where to begin.
Just add a simple PowerShell step to your build configuration. Something like this:
function TeamCity-SetBuildStatistic([string]$key, [string]$value) {
    Write-Output "##teamcity[buildStatisticValue key='$key' value='$value']"
}

$outputFile = 'MetricsResults.xml'
$xml = [xml] (Get-Content $outputFile)
$metrics = $xml.CodeMetricsReport.Targets.Target[0].Modules.Module.Metrics
$metrics.Metric |
    ForEach-Object { TeamCity-SetBuildStatistic "$($_.Name)" $_.Value.Replace(',', '') }
It uses XML output from FxCop Metrics. You have to update the script for your actual schema.
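If you also want the coverage numbers to feed TeamCity's built-in coverage statistics and charts, you can report them with the predefined coverage statistic keys instead of custom ones; the key names below are the ones I recall from the TeamCity service-message documentation, so double-check them against your TeamCity version:
##teamcity[buildStatisticValue key='CodeCoverageAbsLCovered' value='1200']
##teamcity[buildStatisticValue key='CodeCoverageAbsLTotal' value='1500']
##teamcity[buildStatisticValue key='CodeCoverageL' value='80']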
I am building a simple CI pipeline for my Python code in Jenkins using a Jenkinsfile, which basically does the following things:
Creating a test environment and installing dependencies.
Running static code metrics:
various raw metrics: SLOC, comment lines, blank lines, cyclomatic complexity, maintainability index, etc.
test coverage reports using coverage
errors and style checks using pylint
Testing the pulled source code (unit testing)
So let's say I have this stage, for example, for "Static code metrics":
...
stage('Static code metrics') {
    steps {
        echo "Raw metrics"
        sh '''
            radon raw --json my_python_repo/ > raw_report.json
            radon cc --json my_python_repo/ > cc_report.json
            radon mi --json my_python_repo/ > mi_report.json
            # TODO: add conversion and HTML publisher step
        '''
    }
}
...
As you can see above, the reports are saved in .json format. I need to figure out a way to publish these reports in a visually appealing way on the Jenkins dashboard.
One option is to convert the .json files to HTML and then use the HTML Publisher plugin to publish the reports, but I don't know what tool to use for the conversion.
If there is a way to solve this, or any other way to publish these reports on the Jenkins dashboard, please share it.
Let's say the content of mi_report.json is:
{"my_python_repo/code.py": {"mi": 16.42950884051172, "rank": "B"}, "my_python_repo/test.py": {"mi": 33.532817596089814, "rank": "A"}}
There is no ready-made Jenkins tool for publishing custom JSON as HTML, but you can use the Warnings plugin to publish your metrics the Jenkins way, by having Jenkins parse the radon output directly.
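For illustration, here is a minimal sketch of such a stage. It assumes the newer Warnings Next Generation plugin (the recordIssues step) is installed, that pylint's parseable output is readable by its built-in pyLint tool, and that radon's --xml complexity output is CCM-compatible; treat the tool names and patterns as assumptions to verify against your setup:
stage('Static code metrics') {
    steps {
        sh '''
            # pylint findings in parseable format; --exit-zero keeps the build going when issues are found
            pylint --exit-zero --output-format=parseable my_python_repo/ > pylint.log
            # radon cyclomatic complexity as CCM-compatible XML
            radon cc --xml my_python_repo/ > cc_report.xml
        '''
        // publish both as issue lists with trend charts on the job page
        recordIssues tools: [pyLint(pattern: 'pylint.log'), ccm(pattern: 'cc_report.xml')]
    }
}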
Currently I'm refactoring our Jenkins build pipeline. In the stage that gathers our unit tests I'm trying to enumerate all '**/*.test.dll' files, or '*.test.dll' at least. I read somewhere that this could be achieved using eachFileRecurse on the File object.
But... all calls failed with a FileNotFoundException.
Using the script console on the specific slave I tried the same code and it works as expected. Adding some additional debug lines to our Jenkinsfile shows that the pipeline always returns false.
def TestFile(path)
{
    def file = new File(path)
    echo "File '${file}' exists: ${file.exists()}"
}

TestFile(WORKSPACE)
TestFile(pwd())
TestFile(BUILDPATH)
All of them report 'exists: false', even though all these paths are used during the build.
(How) can I use the File object in a pipeline, or how else can I get the files I need?
Use the fileExists step instead; java.io.File is evaluated on the Jenkins master, not on the agent, so it cannot see the agent's workspace. For example, in your case it would be something like this:
echo "File '${file}' exists: ${fileExists(file)}"
I have two runners in my automation project as follows:
Main runner - executes all the @ui-test tagged test cases; if a scenario fails, target/rerun.txt is populated with the scenario location (e.g. features/Dummy.feature:22):
@RunWith(Cucumber.class)
@CucumberOptions(
    features = "classpath:features",
    plugin = {"pretty", "html:target/cucumber-html-report", "json:target/cucumber.json", "rerun:target/rerun.txt"},
    tags = {"@ui-test", "~@ignore"}
)
public class RunCukesTest {
}
Secondary runner - Re-executes the scenarios from target/rerun.txt:
@RunWith(Cucumber.class)
@CucumberOptions(
    features = "@target/rerun.txt",
    plugin = {"pretty", "html:target/cucumber-html-report-rerun", "json:target/cucumber_rerun.json"}
)
public class ReRunFailedCukesTest {
}
When the execution is performed, two result JSON files are created:
cucumber.json
cucumber_rerun.json
Jenkins collects the results via the Cucumber-JVM Reports plugin and creates a combined report.
The problem is that even if all the target/rerun.txt scenarios pass in the second run, the report status remains failed because of cucumber.json.
Is there a way (by configuring the Cucumber-JVM Reports plugin or modifying the runners above) to overwrite cucumber.json with the results from cucumber_rerun.json and publish only the modified cucumber.json?
Related keywords: maven, java, cucumber-java8, cucumber-junit, junit
I had a problem similar to yours, though I used a single runner and handled re-runs directly from TestNG (re-runs were one of the reasons I switched from JUnit to TestNG), and as a result I had an inflated number of tests in my JSON report.
My solution was to clean the JSON files afterwards; that way, even though Jenkins knows about the failed tests, it won't mark the build as failed or unstable.
In your particular case you could try to match the scenarios from the rerun report and exclude them from the regular JSON report.
For parsing the JSON I can recommend Jackson (FasterXML).
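For illustration, here is a rough Groovy sketch of that post-processing, run after both runners have finished. It uses JsonSlurper rather than Jackson to keep it short; the file names and the assumption that the Cucumber JSON is a list of features whose elements can be matched by uri and line are mine, so adjust to your actual report layout:
import groovy.json.JsonOutput
import groovy.json.JsonSlurper

def slurper = new JsonSlurper()
def main  = slurper.parse(new File('target/cucumber.json'))
def rerun = slurper.parse(new File('target/cucumber_rerun.json'))

// Key a scenario by its feature uri and line, e.g. "features/Dummy.feature:22"
def key = { feature, scenario -> "${feature.uri}:${scenario.line}".toString() }

// Index the re-run scenarios
def rerunById = [:]
rerun.each { feature ->
    feature.elements?.each { scenario -> rerunById[key(feature, scenario)] = scenario }
}

// Replace each scenario in the main report with its re-run result, if one exists
main.each { feature ->
    feature.elements = feature.elements?.collect { scenario ->
        rerunById[key(feature, scenario)] ?: scenario
    }
}

// Overwrite cucumber.json so only the merged result is published
new File('target/cucumber.json').text = JsonOutput.prettyPrint(JsonOutput.toJson(main))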
I use the latest release of the Jenkins Cucumber reporting plugin with the configuration below in Jenkins.
[Image of the configuration in Jenkins]
1st Runner
@RunWith(Cucumber.class)
@CucumberOptions(
    features = "FolderFeature",
    glue = {"Gluefolder"},
    plugin = {"html:target/cucumberpf-html-report",
              "json:target/cucumberpf.json"}
)
public class RunPF {
}
2nd Runner
@RunWith(Cucumber.class)
@CucumberOptions(
    features = "Blah/Test.feature",
    glue = {"mygluefolder"},
    plugin = {"html:target/cucumber-html-report",
              "json:target/cucumber.json"}
)
public class RunRA {
}
I had failures in both .json files, and when the tests passed both were merged and updated correctly in one Cucumber report.
Here is the console output:
[CucumberReport] Preparing Cucumber Reports
[CucumberReport] JSON report directory is "C:\Users\ajacobs\workspace\com.mytest.framework\target\"
[CucumberReport] Copied 2 json files from workspace "C:\Users\admin\workspace\yourtest\target" to
reports directory "C:\Users\admin\.jenkins\jobs\Regression\builds\21\cucumber-html-reports\.cache"
[CucumberReport] Processing 2 json files:
[CucumberReport] C:\Users\admin\yourtest\builds\21\cucumber-html-reports\.cache\cucumber.json
[CucumberReport] C:\Users\admin\yourtest\builds\21\cucumber-html-reports\.cache\cucumberpf.json
Finished: SUCCESS
When you set up a Jenkins job, various test result plugins will show regressions if the latest build is worse than the previous one.
We have many jobs for many projects on our Jenkins instance, and we wanted to avoid having a 'job per branch' setup. So currently we are using a parameterized build to build, e.g., different development branches with a single job.
But that means when I build a new branch any regressions are measured against the previous build, which may be for a different branch. What I really want is to measure regressions in a feature branch against the latest build of the master branch.
I thought we should probably set up a separate 'master' build alongside the parameterized 'branches' build. But I still can't see how I would compare results between jobs. Is there any plugin that can help?
UPDATE
I have started experimenting in the script console to see if I could write a post-build script... I have managed to get the latest build of the master branch in my parameterized job... but I can't work out how to get to the test results from the build object.
The data I need is available in JSON at
http://<jenkins server>/job/<job name>/<build number>/testReport/api/json?pretty=true
...if I could just get at this data structure it would be great!
I tried using JsonSlurper to load the JSON via HTTP, but I get a 403, I guess because my script has no authenticated session.
I guess I could load the XML test results from disk and parse them in my script; it just seems a bit silly when Jenkins has already done this.
I eventually managed to achieve everything I wanted using a Groovy script in the Groovy Postbuild Plugin.
I did a lot of exploring using the script console (http://<jenkins>/script); the Jenkins API class docs are also handy.
Everyone's use case is going to be a bit different, as you have to dig into the build plugins to get the info you need, but here are some bits of my code which may help.
First get the build you want:
def getProject(projectName) {
    // in a postbuild action use `manager.hudson`
    // in the script web console use `Jenkins.instance`
    def project = manager.hudson.getItemByFullName(projectName)
    if (!project) {
        throw new RuntimeException("Project not found: $projectName")
    }
    project
}
// CloudBees folder plugin is supported, you can use natural paths:
project = getProject('MyFolder/TestJob')
build = project.getLastCompletedBuild()
The main test results (JUnit etc.) seem to be available directly on the build as:
result = build.getTestResultAction()
// eg
failedTestNames = result.getFailedTests().collect { test ->
    test.getFullName()
}
To get the more specialised results from, e.g., the Violations plugin or Cobertura code coverage, you have to look for a specific build action.
// have a look what's available:
build.getActions()
You'll see a list of stuff like:
[hudson.plugins.git.GitTagAction#2b4b8a1c,
hudson.scm.SCMRevisionState$None#40d6dce2,
hudson.tasks.junit.TestResultAction#39c99826,
jenkins.plugins.show_build_parameters.ShowParametersBuildAction#4291d1a5]
These are instances; the part in front of the # sign is the class name, so I used that to make this method for getting a specific action:
def final VIOLATIONS_ACTION = hudson.plugins.violations.ViolationsBuildAction
def final COVERAGE_ACTION = hudson.plugins.cobertura.CoberturaBuildAction
def getAction(build, actionCls) {
    def action = build.getActions().findResult { act ->
        actionCls.isInstance(act) ? act : null
    }
    if (!action) {
        throw new RuntimeException("Action not found in ${build.getFullDisplayName()}: ${actionCls.getSimpleName()}")
    }
    action
}
violations = getAction(build, VIOLATIONS_ACTION)
// you have to explore a bit more to find what you're interested in:
pylint_count = violations?.getReport()?.getViolations()?."pylint"
coverage = getAction(build, COVERAGE_ACTION)?.getResults()
// if you println it looks like a map but it's really an Enum of Ratio objects
// convert to something nicer to work with:
coverage_map = coverage.collectEntries { key, val -> [key.name(), val.getPercentageFloat()] }
With these building blocks I was able to put together a post-build script which compared the results of two 'unrelated' build jobs and then used the Groovy Postbuild plugin's helper methods to set the build status.
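As a rough illustration of that last step (the job name and the threshold logic are made up; getProject is the helper defined above, and the manager methods come from the Groovy Postbuild plugin):
// Compare this build's failure count against the latest completed master build
def masterBuild = getProject('MyFolder/master').getLastCompletedBuild()
def masterFailures = masterBuild.getTestResultAction()?.getFailCount() ?: 0
def thisFailures   = manager.build.getTestResultAction()?.getFailCount() ?: 0

if (thisFailures > masterFailures) {
    manager.addWarningBadge("${thisFailures - masterFailures} more failing test(s) than master")
    manager.buildUnstable()
}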
Hope this helps someone else.
I am trying to run xUnit tests (from an F# module, if it makes any difference) using TestDriven.NET, but whatever I do I get this error:
It looks like you're trying to execute an xUnit.net unit test.
For xUnit 1.5 or above (recommended):
Please ensure that the directory containing your 'xunit.dll' reference also contains xUnit's
test runner files ('xunit.dll.tdnet', 'xunit.runner.tdnet.dll' etc.)
For earlier versions:
You need to install support for TestDriven.Net using xUnit's 'xunit.installer.exe' application.
You can find xUnit.net downloads and support here:
http://www.codeplex.com/xunit
I tried following the suggestions, i.e. I copied the files
xunit.dll.tdnet
xunit.extensions.dll
xunit.gui.clr4.exe
xunit.runner.tdnet.dll
xunit.runner.utility.dll
xunit.runner.utility.xml
xunit.xml
to the folder with xunit.dll and I ran xunit.installer.exe. How can I get it to work?
I just figured out that I forgot to make the test a function in F# (so it was just a value). The error message couldn't be more misleading, though!
You have two problems:
1. Your Fact is broken
If you hover over the please work bit, you'll see something like: unit -> int.
For a Fact to be picked up by an xUnit runner, it needs to yield unit (void).
Hence, one key thing to get right first is to not return anything. In other words, replace your 123 with () (or an assertion).
You can guard against this by putting a : unit annotation on the test:
[<Fact>]
let ``please work`` () : unit = 123
This will force a compilation error.
2. TestDriven.NET is reporting that it cannot find the xunit.tdnet modules
It's critical to get problem 1 right first; then retry, and this problem should be gone.
If it remains...
Either try the VS-based runner, which should work as long as it's installed and xunit.dll is getting to your output directory, or look at the docs for your version of TestDriven.NET for detailed troubleshooting notes. The executive summary: if the .tdnet file is in your output directory, or you undo and redo xunit.installer from the folder containing the packages, it should just work, especially if you are on the latest version.