Excluding simple log statements from JaCoCo coverage report

Some of the methods in our app are purely informative: they mostly log the state of a particular component at a given time, or just report that an event happened, e.g. the following code:
@Override
public void close() {
    logger.info("Stopping Component...");
}
Since there is no point in writing test cases against such methods, is there any way to tell JaCoCo to ignore such logger calls in its coverage reports, so that the code coverage figure goes up?

No, there is no such option. There is a FilteringOptions page on the JaCoCo wiki that has a list of options people would like to have for ignoring code during code coverage, but as mentioned there:
This page discusses a not yet available feature!

Related

Log4j-2 custom log level code generator

I would like Logger to have my custom logging level method implemented. For example, I would like to call log.custom("custom level log"). According to the documentation it is possible, but there are not enough hints for me. Can someone help me understand what exactly this command does?
java -cp log4j-core-2.8.jar \
org.apache.logging.log4j.core.tools.Generate$ExtendedLogger \
com.mycomp.ExtLogger DIAG=350 NOTICE=450 VERBOSE=550 > com/mycomp/ExtLogger.java
What steps should I take after this command exits successfully? What exactly should I swap and where?
What the tool does is generate source code that you can include in your project. The intention is that you use the generated class instead of the standard Log4j2 Logger.
Before running the tool, you need to decide what to call your custom levels and where they rank, relative to the existing levels. The manual page shows a table with the int values of the built-in levels. The int value of your custom level will probably be in between these values.
In the quoted example, the tool will generate a class named ExtLogger in the com.mycomp package that extends the standard Log4j2 Logger with three custom levels (DIAG, NOTICE and VERBOSE). DIAG's int value is 350 so it sits between WARN (300) and INFO (400).
The tool writes the generated source code to the console. The example shows how you can redirect that output to a file. You can then include this file in your project.
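To give an idea of what using the generated class looks like, here is a minimal sketch, assuming the command above was run so that ExtLogger.java ends up in com/mycomp. The generator emits static create(...) factories and one logging method per custom level, named after the lowercase level name; OrderService is just a made-up example class, so check the method names against your own generated file:
package com.mycomp;

public class OrderService {

    // Use the generated ExtLogger instead of LogManager.getLogger(...).
    private static final ExtLogger logger = ExtLogger.create(OrderService.class);

    public void process() {
        // Custom-level methods generated from DIAG=350, NOTICE=450, VERBOSE=550.
        logger.diag("diagnostic detail");
        logger.notice("order processed");
        logger.verbose("verbose trace of the payload");

        // The standard level methods remain available, since the generated
        // class wraps the regular Log4j2 logger.
        logger.info("still works");
    }
}
Whether a message is actually written still depends on the level threshold configured for the logger, exactly as with the built-in levels.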

How does jUnit Task get its information

I understand that if I want XML output from my JUnit tests, I can use the Ant "junit" task to generate such output.
I want to extend the amount of information shown in this generated xml file.
I have additional information in my class file that I want to be available in the XML file as well.
My questions are:
- Where is the information in the XML file coming from?
- Is the information coming from the specific JUnit runner class that is used to run the tests?
- Does the JUnit task only format the received information, or is it generating the information itself?
- Does the JUnit Ant task change the received information? (So it will only show specific information and filter out everything I want to add.)
The JUnit task's XML formatter is implemented in the class org.apache.tools.ant.taskdefs.optional.junit.XMLJUnitResultFormatter, a listener that receives events during the execution of the JUnit tests. For example, when a "test ended" event is received, the formatter appends an XML element in memory for the test. Here's a link to the source code.
Based on what I read from the code:
The schema of the XML is defined in the Ant task's code, specifically in the above-mentioned class; the schema is discussed in Spec. for JUnit XML Output. The content of the report, e.g. the test name and class name, is fetched from the JUnit test classes themselves. Here's the Javadoc for the method that retrieves the test name:
JUnit 3.7 introduces TestCase.getName() and subsequent versions of JUnit remove the old name() method. This method provides access to the name of a TestCase via reflection that is supposed to work with version before and after JUnit 3.7.
since Ant 1.5.1 this method will invoke "public String getName()" on any implementation of Test if it exists.
Since Ant 1.7 also checks for JUnit4TestCaseFacade explicitly. This is used by junit.framework.JUnit4TestAdapter.
The task does not merely format the output; it generates the entire report itself using the mentioned formatter. A couple of posts that may be helpful for extending the formatter:
How do I configure JUnit Ant task to only produce output on failures?
Custom JUnit Report?
A proposed solution is to write a custom formatter class extending org.apache.tools.ant.taskdefs.optional.junit.XMLJUnitResultFormatter and provide it in the classname attribute of the formatter element.
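Here is a rough sketch of what such a class could look like. The class name and package below are made up for illustration, and note that the stock XMLJUnitResultFormatter keeps its DOM document and element tables private, so actually injecting extra attributes into the XML typically means reimplementing the relevant parts of the formatter rather than only delegating to super:
package com.example.ant;

import junit.framework.Test;
import org.apache.tools.ant.taskdefs.optional.junit.JUnitTest;
import org.apache.tools.ant.taskdefs.optional.junit.XMLJUnitResultFormatter;

// Receives the same JUnit events as the stock XML formatter.
public class ExtendedXmlFormatter extends XMLJUnitResultFormatter {

    @Override
    public void startTestSuite(JUnitTest suite) {
        super.startTestSuite(suite);
        // e.g. gather per-suite information (environment, build metadata, ...)
    }

    @Override
    public void endTest(Test test) {
        super.endTest(test);
        // e.g. look up the extra information for this test (annotations,
        // reflection on the test class, ...) and record it for later output.
    }
}
The class is then registered on the junit task via the formatter element's classname attribute, together with an extension attribute (which is required when classname is used instead of type).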

Sample TFS 2010 Build Process Template for NCover [duplicate]

I was wondering if any of you guys had any experience generating code coverage reports in TFS Build Server 2010 while running NUnit tests.
I know it can be easily done with the packaged alternative (MSTest + enabling coverage on the testrunconfig file), but things are a little more involved when using NUnit. I've found some info here and there pointing to NCover, but it seems outdated. I wonder if there are other alternatives and whether someone has actually implemented this or not.
Here's more info about our environment/needs:
- TFS Build Server 2010
- Tests are in plain class libraries (not Test libraries - i.e., no testrunconfig files associated), and are implemented in NUnit. We have no MSTests.
- We are interested in running coverage reports as part of each build and if possible setting coverage threshold requirements for pass/fail criteria.
We've done it with NUnit-NCover and are pretty happy with our results. NUnit execution is followed by NUnitTfs execution in order to get our testing results published in the Build Log. Then NCover kicks in, generating our code coverage results.
Two things could pose as disadvantages:
- NUnitTfs doesn't work well with NCover (at least I couldn't find a way to execute both in the same step), so, since NCover invokes NUnit, I have to run the unit tests twice: once to get the test results and once to get the coverage results via NCover. Naturally, that makes my builds last longer.
- Setting up the arguments for properly invoking NCover wasn't trivial. But once I had installed it, I never had to maintain it.
In any case, the resulting reporting (especially the Trend aspect) is very useful for monitoring how our code evolves over time. Especially if you're working on a Platform (as opposed to short-lived Projects), Trend reports are of great value.
EDIT
I'll try to present, in a quick & dirty manner, how I've implemented this; I hope it can be useful. We currently have NCover 3.4.12 on our build server.
Our simple naming convention regarding our NUnit assemblies is that if we have a production assembly "123.dll", then another assembly named "123_nunit.dll" exists that implements its tests. So, each build has several *_nunit.dll assemblies that are of interest.
The part of the build process template under "If not disable tests" is the one that has been reworked in order to achieve our goals, in particular the section that was named "Run MSTest for Test Assemblies". The whole implementation is here, after some cleanups to make the flow easier to understand (the pic was too large to be inserted directly here).
First, some additional Arguments are implemented in the Build Process Template and are then available to be set in each build definition:
We then form the NUnit args in "Formulate nunitCommandLine":
String.Format("{0} /xml={1}\\{2}.xml", nunitDLL, TestResultsDirectory, Path.GetFileNameWithoutExtension(nunitDLL))
This is then used in the "Invoke NUnit" activity.
If this succeeds and we have enabled coverage for this build, we move to "Generate NCover NCCOV" (the coverage file for this particular assembly). For this we invoke NCover.Console.exe with the following as Args:
String.Format("""{0}"" ""{1}"" //w ""{2}"" //x ""{3}\{4}"" //literal //ias {5} //onlywithsource //p ""{6}""",
NUnitPath,
Path.GetFileName(nunitDLL),
Path.GetDirectoryName(nunitDLL),
Path.GetDirectoryName(Path.GetDirectoryName(nunitDLL)),
Path.GetFileName(nunitDLL).Replace("_nunit.dll", ".nccov"),
Path.GetFileNameWithoutExtension(nunitDLL).Replace("_nunit", ""),
BuildDetail.BuildNumber)
All of this runs in the foreach loop "For all nunit dlls". When we exit the loop, we enter "Final NCover Activities", starting with the "Merge NCCovs" part, where NCover.Console.exe is executed again, this time with different args:
String.Format("""{0}\*.nccov"" //s ""{0}\{1}.nccov"" //at ""{2}\{3}\{3}.trend"" //p {1} ",
Path.GetDirectoryName(Path.GetDirectoryName(testAssemblies(0))),
BuildDetail.BuildNumber,
NCoverDropLocation,
BuildDetail.BuildDefinition.TeamProject
)
When this has run, all NCCOV files of this build have been merged into one NCCOV file named after the build, and the Trend file (which monitors the build throughout its life) has been updated with the elements of this current build.
We now only have to generate the final HTML report; this is done in "Generate final NCover rep", where we invoke NCover.Reporting with the following args:
String.Format(" ""{0}\{1}.nccov"" //or FullCoverageReport //op ""{2}\{1}_NCoverReport.html"" //p ""{1}"" //at ""{3}\{4}\{4}_{5}.trend"" ",
Path.GetDirectoryName(Path.GetDirectoryName(testAssemblies(0))),
BuildDetail.BuildNumber,
PathForNCoverResults,
NCoverDropLocation,
BuildDetail.BuildDefinition.TeamProject,
BuildType
)

Setting a Coverage Threshold using Emma and Ant

I'm using Emma in my ant build to perform coverage reporting. For those that have used Emma, is there a way to get the build to fail if the line coverage (or any type of coverage stat) does not meet a particular threshold? e.g. if the line coverage is not 100%
Not out of the box.
However, the report.metrics property (or the metrics attribute of <report></report>) can set thresholds for class, method, block, and line coverage. See Coverage Metrics in the Emma reference.
Generate a plain-text report, then use a regexp filter over it to set up a fail condition.
I wrote an ant task to do this.
You should be able to find all the information you need on my EmmaCheck site.

What profilers and analyzers are there for Erlang/OTP?

Are there any good code profilers/analyzers for Erlang? I need something that can build a call graph (e.g. gprof) for my code.
For static code analysis you have Xref and Dialyzer; for profiling you can use cprof, fprof or eprof (reference here).
The 'fprof' module includes profiling features. From the fprof module documentation:
fprof:apply(foo, create_file_slow, [junk, 1024]).
fprof:profile().
fprof:analyse().
fprof:apply (or trace) runs the function, profile converts the trace file into something useful, and analyse prints out the summary. This will give you a list of function calls observed, what called them, and what they called, as well as wall-clock timing info.
Try this one: https://github.com/virtan/eep
You could get something like this: https://raw.github.com/virtan/eep/master/doc/sshot1.png
