How do I get code coverage for all branches using Pester? - code-coverage

Multiple test cases have been written to test a new Chocolatey function using Pester. How do I check whether all branches have been covered?

The current version of Pester (3.0) does support code coverage.
Simply use
Invoke-Pester -CodeCoverage *.ps1
to get a full report of code coverage (coverage %) and a summary of all the code lines (branches) not executed during testing:
Tests completed in 10.11s
Passed: 66 Failed: 0 Skipped: 0 Pending: 0
Code coverage report:
Covered 99,20 % of 501 analyzed commands in 22 files.
Missed commands:
File Function Line Command
---- -------- ---- -------
Set-ProgressColor.ps1 Set-ProgressColor 19 Write-Verbose "Progress colors are only supported on the PowerShell com...
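If analyzing every *.ps1 file is broader than you need, the -CodeCoverage parameter also accepts specific paths, so you can pair a test file with just the script it exercises. A minimal sketch, assuming both files sit in the current directory:
# Run one test file and measure coverage of the matching script only
Invoke-Pester .\Set-ProgressColor.Tests.ps1 -CodeCoverage .\Set-ProgressColor.ps1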

UPDATE 2:
Thanks to oɔɯǝɹ for pointing out that Pester has now released a version that supports code coverage.
UPDATE 1:
As of Pester Version 3.0, it is now possible to get code coverage information, using:
Invoke-Pester -CodeCoverage <path to file>
This is documented in the project wiki page:
https://github.com/pester/Pester/wiki/Code-Coverage
NOTE: In order to use this, you will require PowerShell version 3.0
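You can check which version you are running with the built-in $PSVersionTable variable:
$PSVersionTable.PSVersion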
ORIGINAL ANSWER:
To the best of my knowledge, Pester doesn't currently support code coverage analysis, but it is something that is being worked on.
There is an open issue for this feature here:
https://github.com/pester/Pester/issues/53
You can see it being worked on here:
http://davewyatt.wordpress.com/2014/06/29/code-coverage-analysis-for-pester-feedback-request/
And there is a screenshot of it working here:
https://twitter.com/nohwnd/status/485093995929157632
So basically, hold tight, and there will hopefully be something soon.
In terms of the actual Chocolatey code base, there is quite a good convention in use, namely that for each *.ps1 file there "should" be a corresponding *.Tests.ps1 file. If this second file doesn't exist, then there are no unit tests for that function.
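As a rough illustration of that convention (not part of the Chocolatey tooling), the following sketch lists every *.ps1 file that has no matching *.Tests.ps1 alongside it:
Get-ChildItem -Recurse -Filter *.ps1 |
    Where-Object { $_.Name -notlike '*.Tests.ps1' } |
    Where-Object { -not (Test-Path ($_.FullName -replace '\.ps1$', '.Tests.ps1')) } |
    Select-Object -ExpandProperty FullName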

Related

Jenkins PyLint Warnings tool parses log files but reports 'found 0 issues'

I have set up Jenkins to run pylint on all Python source files, and all the log files are generated (apparently correctly) into a sub-directory as follows:
Source\pylint_logs\pylint1.log, pylint2.log, ..., pylint75.log
I have included a --msg-template definition based on the instructions on my Jenkins Configure page: Post-build Actions->Record compiler warnings and static analysis results->Static Analysis Tools. The template is shown as:
msg-template={path}:{line}: [{msg_id}, {obj}] {msg} ({symbol})
An example of one of the log files being generated by Jenkins/pylint is as follows:
************* Module FigureView
myapp\Views\FigureView.py:1: [C0103, ] Module name "FigureView" doesn't conform to snake_case naming style (invalid-name)
myapp\Views\FigureView.py:30: [C0103, FigureView.__init__] Attribute name "ax" doesn't conform to snake_case naming style (invalid-name)
------------------------------------------------------------------
Your code has been rated at 8.57/10 (previous run: 8.57/10, +0.00)
For the PyLint Report File Pattern, I have: Source/pylint_logs/pylint*.log
It appears that PyLint Warnings is parsing the files because the console output looks like this:
[PyLint] Searching for all files in 'D:\Jenkins\workspace\PROJECT' that match the pattern 'Source/pylint_logs/pylint*.log'
[PyLint] -> found 75 files
[PyLint] Successfully parsed file D:\Jenkins\workspace\PROJECT\Source\pylint_logs\pylint1.log
[PyLint] -> found 0 issues (skipped 0 duplicates)
[PyLint] Successfully parsed file D:\Jenkins\workspace\PROJECT\Source\pylint_logs\pylint10.log
[PyLint] -> found 0 issues (skipped 0 duplicates)
This repeats for all 75 files, even though there are plenty of issues in the log files.
What is odd, is that when I was first prototyping the use of Jenkins on this project, I set it up to just run pylint on a single file. I ran across another StackOverflow post that showed a msg-template that allowed me to get it working (unable to get pylint output to populate the violations graph). I even got the graph to show up for the PyLint Warnings Trend. I used the following definition per the post:
msg-template={path}:{line}: [{msg_id}({symbol}), {obj}] {msg}
Note that this format is slightly different from the one recommended by my Jenkins page (shown earlier). Even though this worked for a single file, neither template now seems to work for multiple files, or else there is something other than the template causing the problem. My graph has flat-lined, and I always get 0 issues reported.
I have had trouble finding useful documentation on the Jenkins PyLint Warnings tool. Does anyone have any ideas or pointers to documentation I can research further? Thanks much!
Ensure you pass the output-format parameter on the pylint command line. Example:
pylint --exit-zero --output-format=parseable module1 module2 > pylint.report
You have to set Pylint's --msg-template option in .pylintrc as:
msg-template={path}: {line}: [{msg_id} ({symbol}), {obj}] {msg}
output-format=text
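For context, both settings normally live in the [REPORTS] section of a classic .pylintrc, so a hypothetical fragment would look like:
[REPORTS]
output-format=text
msg-template={path}: {line}: [{msg_id} ({symbol}), {obj}] {msg}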

How to generate documentation of robot file using sphinx?

I have a robot file (calc_check.robot) in which each test case has separate documentation.
*** Settings ***
Documentation
...    The test cases are designed to test the calculator.
Library    ../../Library/AddNumbers

*** Test Cases ***
Calc_check_test Testcase01_a
    [Documentation]
    ...    Verify that two numbers are added or not
    [Tags]    add    calculator
    ${addition}=    Add numbers    10    20

Calc_check_test Testcase01_b
    [Documentation]
    ...    Verify that two numbers are added or not with negative sign
    [Tags]    add    calculator
    ${addition}=    Add numbers    10    -20
When I try to generate documentation for that robot file using the rst file (call_check.rst), I get the complete test case along with its documentation, but I need only the [Documentation] part.
calc_check
======================================
.. robot-settings::
   :source: /Users/sphinx/calc_check.robot

.. robot-tests::
   :source: /Users/sphinx/calc_check.robot
I want only the documentation (i.e., the [Documentation] part) of the two test cases, excluding the test case code.
Please tell me how to generate only that documentation part.
Robot Framework provides a documentation generation tool called Libdoc:
https://robot-framework.readthedocs.io/en/2.9.2/_modules/robot/libdoc.html
The problem is that it only generates documentation for library and resource files (those without a *** Test Cases *** section).
If you need to generate docs from test suites, I would recommend temporarily changing the test suite into a resource file (change the section to *** Keywords ***) and running Libdoc on that file:
python -m robot.libdoc <path to res/lib> <list/show>
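For illustration only, the temporarily converted file could look something like this (tags removed, since the goal is just to expose the [Documentation] text to Libdoc):
*** Keywords ***
Calc_check_test Testcase01_a
    [Documentation]
    ...    Verify that two numbers are added or not
    ${addition}=    Add numbers    10    20

Calc_check_test Testcase01_b
    [Documentation]
    ...    Verify that two numbers are added or not with negative sign
    ${addition}=    Add numbers    10    -20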

Sonar not using lcov file

I have a Jenkins job that is using the "Invoke standalone Sonar analysis" for a javascript project.
I thought it was working fine with the following parameters:
sonar.sources=src
sonar.language=js
sonar.dynamicAnalysis=reuseReports
sonar.javascript.jstestdriver.coveragefile=target/test-coverage/jscover.lcov
sonar.javascript.lcov.reportPath=target/test-coverage/jscover.lcov
But then I noticed that the numbers that are being reported in Sonar do not match the number in the lcov file.
When I log in to Sonar, I see the code coverage number as 30%.
But when I examine the lcov file, I get completely different numbers:
$lcov --summary target/test-coverage/jscover.lcov
...
lines......: 48.1%
functions..: 41.7%
branches...: no data found
And in fact, when I view the jscover.html report file, I see the total coverage at 48%.
Sonar reports it at 30%.
And drilling down into the individual files, Sonar's results do not match the results in the lcov file either.
For instance:
Just by looking at a particular file, /src/js/models/Call.js, lcov says it’s at 97% code coverage.
But Sonar displays this:
49.0% by unit tests. Line coverage: 97.0% (97/100). Branch coverage: 0.0% (0/98)
It’s as if Sonar is using the Branch Coverage AND the Line Coverage Stats to get the final code coverage results at 49.0%.
Do you know what I am doing wrong? Do you know why Sonar is not using the coverage results from the lcov file? Is it because the Branch Coverage has no data?
Thanks for any insight on this.
Code coverage is recomputed by SonarQube. SonarQube just retrieves from the report whether a line is covered or not by unit tests. Example:
DA:10,0 => it means that line 10 is not covered
DA:20,1 => it means that line 20 is covered
DA:30,5 => it means that line 30 is covered
Then SonarQube recomputes the code coverage:
Number of covered lines / (Number of covered lines + Number of uncovered lines)
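Applied to the three example DA lines above, that gives 2 covered lines out of 3 analyzed lines, i.e. 2 / (2 + 1) ≈ 66.7% line coverage. Note that the 49.0% in the question looks like a combined figure over lines and branch conditions rather than line coverage alone: with the numbers shown there, (97 + 0) / (100 + 98) ≈ 49.0%, which matches the questioner's suspicion that branch coverage is factored into the overall result.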

Test with undefined steps not flagged as a failed test

I am facing the issue of a test that has undefined step(s) not being flagged as a failed test.
In the Java code we use Selenium 2/WebDriver and tests are driven by Ant and run in a Continuous Integration environment.
For the following scenario:
#test1
Scenario: Run test with an undefined step
Given I am logged in to the application //working
And I view the test example //working
Then the tree panel exists in the layout //undefined step
The following is a snippet of what is seen in the console:
#test1
Scenario: Run test with an undefined step
Given I am logged in to the application
And I view the test example
Then the tree panel exists in the layout
1 scenario (1 undefined)
3 steps (1 undefined, 2 passed)
The ant target used to run the test:
ant test.cuke.firefox -Dwebtest.server="http://localhost:9944" -Dwebtest.cuke.options="--tags #test1"|wac
I read that using the --strict flag gets the tests to fail.
But I've no idea of where I need to mention the flag.
Is it in the build.xml file? If so, where exactly - as wherever I've tried hasn't helped.
Is it in the cucumber.yml file?
There are 2 such files:
i) \lib\cucumber.jruby\gems\cucumber-0.8.7
ii) \lib\cucumber.jruby\gems\gherkin-2.1.5-java
If not in these files, where else?
Could you please point to where and how the flag needs to be set?
I've tried looking up the help but nothing has helped (probably I'm looking in all the wrong places!)
Thanks!
You need to set the strict option:
http://cukes.info/api/cucumber/jvm/javadoc/cucumber/api/junit/Cucumber.Options.html#strict()
Edit: You can set this flag on the RunCukesTest class like:
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@Cucumber.Options(
        format = {"html:target/cucumber-html-report"},
        strict = true)
public class RunCukesTest {
}
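Since the question launches the tests through Ant rather than a JUnit runner class, another option worth trying (assuming webtest.cuke.options is passed straight through to the cucumber command line, which the build script would need to confirm) is to append --strict to that option string, e.g.:
ant test.cuke.firefox -Dwebtest.server="http://localhost:9944" -Dwebtest.cuke.options="--tags #test1 --strict"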

Running F# xUnit Fact from TestDriven.NET reporting "It looks like you're trying to execute an xUnit.net unit test."

I am trying to run xUnit tests (from an F# module, if it makes any difference) using TestDriven.NET, but whatever I do I get this error:
It looks like you're trying to execute an xUnit.net unit test.
For xUnit 1.5 or above (recommended):
Please ensure that the directory containing your 'xunit.dll' reference also contains xUnit's
test runner files ('xunit.dll.tdnet', 'xunit.runner.tdnet.dll' etc.)
For earlier versions:
You need to install support for TestDriven.Net using xUnit's 'xunit.installer.exe' application.
You can find xUnit.net downloads and support here:
http://www.codeplex.com/xunit
I tried following the suggestions, i.e. I copied the files
xunit.dll.tdnet
xunit.extensions.dll
xunit.gui.clr4.exe
xunit.runner.tdnet.dll
xunit.runner.utility.dll
xunit.runner.utility.xml
xunit.xml
to the folder with xunit.dll and I ran xunit.installer.exe. How can I get it to work?
I just figured out that I forgot to make the test a function in F# (so it was just a value). The error message couldn't be more misleading, though!
You have two problems:
Your Fact is broken:
If you hover over the ``please work`` bit, you'll see something like: unit -> int
For a Fact to be picked up by an xUnit runner, it needs to yield unit (void).
Hence, one key thing to get right first is to not return anything. In other words, replace your 123 with () (or an Assertion).
You can guard against this by putting a : unit annotation on the test:
[<Fact>]
let ``please work`` () : unit = 123
This will force a compilation error.
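Once the body returns unit, the test might look something like the following sketch (the assertion is just a placeholder; it assumes xunit.dll is referenced and the Xunit namespace is opened):
open Xunit

[<Fact>]
let ``please work`` () : unit =
    Assert.Equal(123, 123)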
TestDriven.NET is reporting it cannot find the xunit.tdnet modules
It's critical to get step 1 right first. Then retry, and the problem should be gone.
If it remains...
Either try the VS-based runner, which should work as long as it's installed and xunit.dll is getting to your output directory, or look at the docs for your version of TestDriven.NET for detailed troubleshooting notes. The executive summary: if the .tdnet file is in your output directory, or you undo and redo the xunit.installer step from the folder containing the packages, it should just work, especially if you are on the latest version.
