Sample TFS 2010 Build Process Template for NCover [duplicate]

I was wondering if any of you guys had any experience generating code coverage reports in TFS Build Server 2010 while running NUnit tests.
I know it can be easily done with the packaged alternative (MSTest + enabling coverage on the testrunconfig file), but things are a little more involved when using NUnit. I've found some info here and there pointing to NCover, but it seems outdated. I wonder if there are other alternatives and whether someone has actually implemented this or not.
Here's more info about our environment/needs:
- TFS Build Server 2010
- Tests are in plain class libraries (not Test libraries - i.e., no testrunconfig files associated), and are implemented in NUnit. We have no MSTests.
- We are interested in running coverage reports as part of each build and if possible setting coverage threshold requirements for pass/fail criteria.

We've done it with NUnit and NCover and are pretty happy with the results. NUnit execution is followed by NUnitTfs execution in order to get our test results published in the build log. Then NCover kicks in, generating our code coverage results.
Two things could be considered disadvantages:
- NUnitTfs doesn't work well with NCover; at least I couldn't find a way to execute both in the same step, so (since NCover invokes NUnit) I have to run the unit tests twice: once to get the test results and once to get the coverage results via NCover. Naturally, that makes my builds last longer.
- Setting up the arguments for properly invoking NCover wasn't trivial. But since I installed it, I have never had to maintain it.
In any case, the resulting reporting (especially the Trend aspect) is very useful for monitoring how our code evolves over time. Especially if you're working on a platform (as opposed to short-lived projects), Trend reports are of great value.
EDIT
I'll try to present, in a quick & dirty manner, how I've implemented this; I hope it's useful. We currently have NCover 3.4.12 on our build server.
Our simple naming convention regarding our NUnit assemblies is that if we have a production assembly "123.dll", then another assembly named "123_nunit.dll" exists that implements its tests. So, each build has several *_nunit.dll assemblies that are of interest.
The part of the build process template under "If not disable tests" is the one that was reworked to achieve our goals, in particular the section named "Run MSTest for Test Assemblies". The whole implementation is shown here, after some cleanup to make the flow easier to understand (the picture was too large to be inserted directly here).
First, some additional Arguments are defined in the Build Process Template and are then available to be set in each build definition:
We then form the NUnit args in "Formulate nunitCommandLine":
String.Format("{0} /xml={1}\\{2}.xml", nunitDLL, TestResultsDirectory, Path.GetFileNameWithoutExtension(nunitDLL))
This is then used in the "Invoke NUnit" step.
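For illustration (all paths hypothetical), if nunitDLL were C:\Builds\1\Bin\Foo_nunit.dll and TestResultsDirectory were C:\Builds\1\TestResults, the formulated arguments would come out roughly as:
C:\Builds\1\Bin\Foo_nunit.dll /xml=C:\Builds\1\TestResults\Foo_nunit.xml
i.e. the assembly to test plus NUnit's /xml switch pointing the results file at the build's test results directory.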
If this succeeds and coverage has been enabled for this build, we move on to "Generate NCover NCCOV" (the coverage file for this particular assembly). For this we invoke NCover.Console.exe with the following Args:
String.Format("""{0}"" ""{1}"" //w ""{2}"" //x ""{3}\{4}"" //literal //ias {5} //onlywithsource //p ""{6}""",
NUnitPath,
Path.GetFileName(nunitDLL),
Path.GetDirectoryName(nunitDLL),
Path.GetDirectoryName(Path.GetDirectoryName(nunitDLL)),
Path.GetFileName(nunitDLL).Replace("_nunit.dll", ".nccov"),
Path.GetFileNameWithoutExtension(nunitDLL).Replace("_nunit", ""),
BuildDetail.BuildNumber)
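Using the same hypothetical assembly as above, with NUnitPath pointing at the NUnit console runner and a build number of MyBuild_20130101.1 (all of these values are illustrative), the expression evaluates to an invocation along these lines:
NCover.Console.exe "C:\Program Files\NUnit\bin\nunit-console.exe" "Foo_nunit.dll" //w "C:\Builds\1\Bin" //x "C:\Builds\1\Foo.nccov" //literal //ias Foo //onlywithsource //p "MyBuild_20130101.1"
In other words, NCover runs NUnit against the test assembly from its own folder, writes Foo.nccov one level up, restricts coverage to the production assembly Foo, and tags the results with the build number as the project name.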
All of this runs inside the foreach loop "For all nunit dlls". When we exit the loop we enter "Final NCover Activities", starting with the "Merge NCCovs" part, where NCover.Console.exe is executed again, this time with different args:
String.Format("""{0}\*.nccov"" //s ""{0}\{1}.nccov"" //at ""{2}\{3}\{3}.trend"" //p {1} ",
Path.GetDirectoryName(Path.GetDirectoryName(testAssemblies(0))),
BuildDetail.BuildNumber,
NCoverDropLocation,
BuildDetail.BuildDefinition.TeamProject
)
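Keeping the same hypothetical values, and assuming NCoverDropLocation is \\server\NCoverDrop and the team project is MyProject, the merge step would look roughly like:
NCover.Console.exe "C:\Builds\1\*.nccov" //s "C:\Builds\1\MyBuild_20130101.1.nccov" //at "\\server\NCoverDrop\MyProject\MyProject.trend" //p MyBuild_20130101.1
That is, all the per-assembly .nccov files are summarized into a single build-level .nccov and appended to the team project's .trend file.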
When this has run, all the NCCOV files of this build have been merged into one NCCOV file named after the build, and the Trend file (which tracks coverage throughout the project's life) has been updated with the data from this build.
Now we only have to generate the final HTML report; this is done in "Generate final NCover rep", where we invoke NCover.Reporting with the following args:
String.Format(" ""{0}\{1}.nccov"" //or FullCoverageReport //op ""{2}\{1}_NCoverReport.html"" //p ""{1}"" //at ""{3}\{4}\{4}_{5}.trend"" ",
Path.GetDirectoryName(Path.GetDirectoryName(testAssemblies(0))),
BuildDetail.BuildNumber,
PathForNCoverResults,
NCoverDropLocation,
BuildDetail.BuildDefinition.TeamProject,
BuildType
)
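With the same hypothetical values, plus PathForNCoverResults set to \\server\NCoverReports and a BuildType of Nightly, the reporting arguments would expand to something like:
"C:\Builds\1\MyBuild_20130101.1.nccov" //or FullCoverageReport //op "\\server\NCoverReports\MyBuild_20130101.1_NCoverReport.html" //p "MyBuild_20130101.1" //at "\\server\NCoverDrop\MyProject\MyProject_Nightly.trend"
producing a full HTML coverage report for the build, enriched with the trend history kept for that build type.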

Related

How to show all gtest cases with Bazel without the "test" command

I want to query all gtest cases via Bazel.
The "--gtest_filter" parameter can only be used with the "bazel test" command,
and I have tried "bazel query //xxx:all", but that only shows the test targets defined in the BUILD file. I want to get the list of cases from the xxx.cc files.
This is not a job that bazel query can do. Query operates on the graph structure of targets. A fundamental design decision of Bazel is that this graph can be computed by looking only at BUILD files and the .bzl files they depend on. In particular, parsing source files is not allowed.
(The argument to --test_filter is simply passed through to the test runner; Bazel does not know what it represents.)
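Since the flag is just handed to the test binary, one workaround (outside of bazel query) is to run the test binary itself and ask googletest to list its cases, e.g. for a hypothetical target //xxx:my_test:
bazel run //xxx:my_test -- --gtest_list_tests
Everything after -- goes straight to the test executable, so gtest prints the test suites and cases it discovered in the .cc files.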
If you use CLion with the Bazel plugin you get the following view for googletest tests:
This works even with Catch2 (though for Catch2 the view is not as nice). I guess there is some IDE magic at work here; nevertheless, it gives you what you want. I assume you could also come up with some kind of Bazel aspect that produces this information for you.
I also tested this with Lavender (with minor modifications) and Visual Studio, which likewise gives me a list of all tests in the test overview:

TFS Build log shows that the Test Impact match pattern is set to "/build/*.*/*.dll", not "*.exe"

I started a new project in VS 2013 and TFS 2013. After doing a little coding I ran some tests and created test cases in Test Manager. The tests pass and the test result files show the impact XML files attached. However, all the subsequent builds show no tests impacted. After checking the build logs I see that the Test Impact entry has a match pattern that ends in
"\bin\**\*.dll", but the application is a Windows Forms app (an .exe).
Is there something I have missed in setting up the project that would cause this?
This is the output log section:
...
Run VS Test Runner 00:00:00
There were no matches for the search pattern ...\bin\**\*test*.dll
There were no matches for the search pattern ...\bin\**\*test*.appx
Run optional script after Test Runner 00:00:00
Inputs
EnvironmentVariables:
Enabled: True
Arguments:
FilePath:
Outputs
Result: 0
Get Impacted Tests 00:00:00
There were no matches for the search pattern ...\bin\**\*.dll
A baseline build could not be located. Test impact analysis will not be
performed for this build.
Publish Symbols
...
I found the issue. For Test Impact Analysis the build must have a drop location: the "copy output to the server" option does not count as a "drop location"!

How do I "map" certain return value of a script to "yellow" status in Jenkins?

In Jenkins it is possible to create a free-style project which can contain a script execution step. The build fails (becomes red) when the return code of the script is not 0.
Is there a possibility to make it "yellow"?
(Yellow usually indicates successful build with failed tests)
The system runs on Linux.
Give the Log Parser Plugin a try. That should do the trick for you.
One slightly hacky way to do it is to alter the job to publish test results and supply fake results.
I've got a job that publishes the test results from a file called "results.xml". The last step in my build script checks the return value of the build, copies either "results-good.xml" or "results-unstable.xml" to "results.xml", and then returns zero.
Thus, if the script fails on one of the early steps, the build is red. But if the build succeeds, it is green or yellow based on the return code it would have returned without this hack.
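A minimal sketch of that last step, assuming the script has captured the real exit code of the earlier test phase in a variable named BUILD_STATUS (the variable and file names are illustrative):
# pick a fake JUnit result file based on how the real run went
if [ "$BUILD_STATUS" -eq 0 ]; then
    cp results-good.xml results.xml       # report with no failed tests -> green
else
    cp results-unstable.xml results.xml   # report with failed tests -> yellow (unstable)
fi
exit 0   # never fail this step itself; Jenkins reacts only to the published results
Jenkins then marks the build unstable (yellow) whenever the published results contain failures, which is exactly the mapping asked for.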

Is it possible to display the call "graph" of an ant extension-point?

I have an extension-point defined in Ant:
<extension-point name="foo"/>
A lot of targets contribute to this point in several imported Ant files:
<bindtargets targets="bar" extensionPoint="foo" />
However, I'm kind of lost as to exactly which targets are contributing. Is there a way to have Ant report the targets that would be triggered by a given extension point? More generally, is there a way to display the "call graph" (or simply the list of dependencies) of an Ant target?
I tried using verbose options for ant (-v and such), with no luck.
Thanks
First of all, you can try to debug the ANT process in your IDE using remote debugging by adding some parameters to ANT_OPTS (mine is set in ~/.profile):
http://blog.dahanne.net/2010/06/03/debugging-any-java-application/
Profiling may also help. I found the project Antro on the Ant Wiki:
http://sourceforge.net/projects/antro
Maybe you can try it out. The project is said to be designed for Ant, which looks promising for solving your problem.
You can also use YourKit Java Profiler to do CPU profiling. YJP can show the call graph of a Java application, but I'm not sure whether one can tell which calls correspond to Ant targets.
The following document shows how to start a java application with YJP agent.
http://www.yourkit.com/docs/95/help/agent.jsp
I know of 2 ways to get this information:
You can get the effective target/extension-point invocation sequence from Ant's console logger. To do this, place Ant's logging facility into verbose mode by passing -verbose on the command line to Ant. There are two lines, one after the other, that dump to the console immediately before most targets as they are invoked in your build script:
A line that shows a summary of the targets in the call sequence starting with the text, Build sequence for target(s) 'artifact' is [...].
A line showing the detailed call sequence (nested targets and antcalls included). This line starts with the text, Complete build sequence is [...]. This listing considers, as much as reasonably possible, the evaluation of any if and unless attributes of each target listed at the point the line is logged to the console.
Simply invoke your Ant build as you would normally with the -verbose option and your console should have the information you're looking for.
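For example (target name hypothetical), you can filter the verbose output down to just those two lines:
ant -verbose mytarget | grep -i "build sequence"
which should print both the summary and the complete build sequence for mytarget.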
You can get a pictorial representation of the call sequence using a tool called Grand. However, it hasn't been updated for quite some time and thus doesn't support extension-points (which is what you're concerned with here). It will interpret antcall, ant, and depends relationships. It doesn't evaluate the if and unless attributes but simply identifies the potential execution sequence - more of a dependency hierarchy than an actual call graph. The project is on GitHub, so an update to support extension-points may not be too difficult.
The graphic is rendered using Graphviz.
For an actual call sequence, use option 1.
This is pretty sloppy, but it works. Ant is actually pretty easily scripted, and if you are using at least Java 6 (or it might be Java 7), javascript support is built in and thus can be used right out of the box. This defines a task that will echo the dependencies of any target in call order:
<scriptdef name="listdepends" language="javascript">
    <attribute name="target"/>
    <![CDATA[
        // Names of targets already listed, so each dependency is echoed only once
        var done = [];
        var echo = project.createTask("echo");

        // Recursively walk a target's dependencies, echoing them in call order
        function listdepend(t) {
            done.push(t.getName());
            var depends = t.getDependencies();
            while (depends.hasMoreElements()) {
                var t2 = depends.nextElement();
                if (done.indexOf(t2) == -1) listdepend(project.getTargets().get(t2));
            }
            echo.setMessage(t.getName());
            echo.perform();
        }

        var t = attributes.get("target");
        if (t != null) {
            var targ = project.getTargets().get(t);
            listdepend(targ);
        }
    ]]>
</scriptdef>
In your case, you can create a new target (or not) and call it like so:
<target name="listfoo">
<listdepends target="foo"/>
</target>
As I said, this is somewhat sloppy. It probably isn't very fast (although unless your target triggers thousands of others, it probably isn't noticeably slow). It won't handle antcall tasks (although it could easily be modified to do so) or respond to if and unless attributes. If dependencies nest too deeply, it may hit a recursion depth limit (but I doubt any project nests them that deep).
The array is used to make sure that each dependency is listed once (ant would only run them once).

TFS 2010 RC: How to fail a build for low code coverage?

How can I cause a build to fail when code coverage is below a certain threshold?
The main issue is that the code coverage results file that MSTest produces is in a binary format. However, assuming that things haven't changed too much in VS2010, you should be able to use this utility to convert it into an XML file:
http://codeexperiment.com/file.axd?file=2008%2f9%2fCodeCoverageConverter.zip
NOTE: You'll probably need to recompile it against the VS2010 version of Microsoft.VisualStudio.Coverage.Analysis.dll.
You can then use your preferred method of parsing that XML file, doing the maths for each of the instrumented assemblies to calculate an overall coverage ratio. The XPaths you're interested in (at least for VS2008) are:
/CoverageDSPriv/Module/LinesCovered
/CoverageDSPriv/Module/LinesNotCovered
If you want to do that last step in pure MSBuild, then the 'XmlRead' and 'Math' tasks contained within the MSBuild Community Tasks library should be sufficient:
http://msbuildtasks.tigris.org/
Once you have the overall ratio in an MSBuild property, you then simply use a conditional task to break the build if that number is lower than your desired threshold.
<Error Condition=" $(CodeCoverageRatio) &lt; $(MinCodeCoverage) "
       Text="Code Coverage is below the required threshold." />
There is very likely a way to do this with a build task (particularly if you are willing to roll your own). Hopefully someone will post some sample code for you.
If not, I have been impressed with NDepend for this type of task. You can write in a very self-explanatory, SQL-like syntax to determine all sorts of metrics about your code and warn or fail a build based on thresholds.
Examples:
WARN IF Count > 0 IN SELECT METHODS WHERE CodeWasChanged AND PercentageCoverage < 95
WARN IF Count > 0 IN SELECT METHODS WHERE IsPublic AND IsInOlderBuild AND WasRemoved
Ancient question, but not marked as answered. Take a look at this
