PITest skips tests in Ant

I have the problem that PITest skips some of my tests. It reports no line coverage for the tested class, and all mutations in it stay alive:
replaced return value with Collections.emptyList for ... → NO_COVERAGE
The "Tests examined" list in the report is empty.
I did check the following:
the class files of both the tested class and its test are present in one of the directories on the classpath
the test is not excluded
tests in the same directory are run
source directory is provided correctly
no errors occur on the console, even when logging with verbose=true
I also logged and checked all the parameters passed to the task:
<pitest
    pitClasspath="pit.path"
    classPath="mutation.path"
    targetClasses="my.package1.*,my.package2.*,my.package3.*"
    targetTests="my.package1.Class1Test,my.package2.sub.Class2Test,my.package3.sub.Class3Test"
    reportDir="pitest"
    sourceDir="src,testsrc"
    timestampedReports="false"
    outputFormats="HTML,XML"
    excludedClasses="my.package1.*Test,my.package2.*Test,my.package3.*Test"
    verbose="true"
/>
I am using PITest version 1.4.11, but I also tested with 1.6.2 and 1.4.3. I'm using Java 1.8.0_211 and Ant 1.9.13.
I set up a test project at:
https://github.com/johannesn/pittestskippingtests
This is the log output for this sample project:
Buildfile: .../pitestskipingtests/build.xml
mutationCoverage:
[delete] Deleting directory .../pitestskipingtests/classes
[delete] Deleting directory .../pitestskipingtests/testclasses
[mkdir] Created dir: .../pitestskipingtests/classes
[mkdir] Created dir: .../pitestskipingtests/testclasses
[javac] .../pitestskipingtests/build.xml:30: warning: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds
[javac] Compiling 3 source files to .../pitestskipingtests/classes
[javac] .../pitestskipingtests/build.xml:31: warning: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds
[javac] Compiling 3 source files to .../pitestskipingtests/testclasses
[pitest] 19:24:08 PIT >> INFO : ---------------------------------------------------------------------------
[pitest] 19:24:08 PIT >> INFO : Enabled (+) and disabled (-) features.
[pitest] 19:24:08 PIT >> INFO : -----------------------------------------
[pitest] 19:24:08 PIT >> INFO : +FFBLOCK Filters mutations in code duplicated by finally block inlining
[pitest] 19:24:08 PIT >> INFO : +FSTATI Filters mutations in static initializers and code called only from them
[pitest] 19:24:08 PIT >> INFO : +FSEQUIVEQUALS Filters equivalent mutations that affect only performance in short cutting equals methods
[pitest] 19:24:08 PIT >> INFO : +FFEACH Filters mutations in compiler generated code that implements for each loops
[pitest] 19:24:08 PIT >> INFO : +FINFINC Filters mutations to increments that may cause infinite loops
[pitest] 19:24:08 PIT >> INFO : +FFLOOP Filters any mutations to increments in for loops as they may cause timeouts
[pitest] 19:24:08 PIT >> INFO : +FRETEQUIV Filters return vals mutants with bytecode equivalent to the unmutated class
[pitest] 19:24:08 PIT >> INFO : +FINULL Filters mutations in compiler generated code that checks for null by calling getClass
[pitest] 19:24:08 PIT >> INFO : +FTRYWR Filters mutations in code generated for try with resources statements
[pitest] 19:24:08 PIT >> INFO : +FKOTLIN Filters out junk mutations in bytecode created by compiler for kotlin language features
[pitest] 19:24:08 PIT >> INFO : +FSTATINIT Filters mutations in static initializers and code called only from them
[pitest] 19:24:08 PIT >> INFO : +FLOGCALL Filters mutations in code that makes calls to logging frameworks
[pitest] 19:24:08 PIT >> INFO : +FINFIT Filters mutations that may cause infinite loops by removing calls to iterator.next
[pitest] 19:24:08 PIT >> INFO : +FANN Filters mutations in classes and methods with matching annotations of class or runtime retention
[pitest] 19:24:08 PIT >> INFO : [annotation] Annotation to avoid (full package name not required)
[pitest] 19:24:08 PIT >> INFO : -CLASSLIMIT Limits the maximum number of mutations per class
[pitest] 19:24:08 PIT >> INFO : [limit] Integer value for maximum mutations to create per class
[pitest] 19:24:08 PIT >> INFO : -EXPORT Exports mutants bytecode and other details to disk
[pitest] 19:24:08 PIT >> INFO : ---------------------------------------------------------------------------
[pitest] 19:24:08 PIT >> FINE : Running report with ReportOptions [targetClasses=[my.package1.*, my.package2.*, my.package3.*], excludedMethods=[], excludedClasses=[my.package1.*Test, my.package2.*Test, my.package3.*Test], excludedTestClasses=[], codePaths=[], reportDir=pitest, historyInputLocation=null, historyOutputLocation=null, sourceDirs=[src, testsrc], classPathElements=[.../pitestskipingtests/resources/pitest/pitest-1.4.3.jar, .../pitestskipingtests/testclasses, .../pitestskipingtests/classes, .../pitestskipingtests/lib/junit-4.12.jar], mutators=[], features=[], dependencyAnalysisMaxDistance=-1, jvmArgs=[], numberOfThreads=1, timeoutFactor=1.25, timeoutConstant=4000, targetTests=[^my\.package1\.Class1Test$, ^my\.package2\.sub\.Class2Test$, ^my\.package3\.sub\.Class3Test$], loggingClasses=[], maxMutationsPerClass=0, verbose=true, failWhenNoMutations=true, outputs=[HTML, XML], groupConfig=TestGroupConfig [excludedGroups=[], includedGroups=[]], fullMutationMatrix=false, mutationUnitSize=0, shouldCreateTimestampedReports=false, detectInlinedCode=false, exportLineCoverage=false, mutationThreshold=0, coverageThreshold=0, mutationEngine=gregor, javaExecutable=null, includeLaunchClasspath=false, properties={}, maxSurvivors=-1, excludedRunners=[], includedTestMethods=[], testPlugin=junit, useClasspathJar=false]
[pitest] 19:24:08 PIT >> FINE : System class path is .../pitestskipingtests/resources/pitest/pitest-1.4.3.jar:.../pitestskipingtests/resources/pitest/pitest-ant-1.4.3.jar:.../pitestskipingtests/resources/pitest/pitest-entry-1.4.3.jar:.../pitestskipingtests/resources/pitest/xmlpull-1.1.3.1.jar:.../pitestskipingtests/resources/pitest/xstream-1.4.11.1.jar:.../pitestskipingtests/lib/junit-4.12.jar
[pitest] 19:24:08 PIT >> FINE : Maximum available memory is 3641 mb
[pitest] 19:24:08 PIT >> FINE : MINION : Installing PIT agent
[pitest]
[pitest] 19:24:09 PIT >> INFO : Sending 3 test classes to minion
[pitest] 19:24:09 PIT >> INFO : Sent tests to minion
[pitest] 19:24:09 PIT >> FINE : Coverage generator Minion exited ok
[pitest] 19:24:09 PIT >> INFO : Calculated coverage in 0 seconds.
[pitest] 19:24:09 PIT >> FINE : Used memory after coverage calculation 14 mb
[pitest] 19:24:09 PIT >> FINE : Free Memory after coverage calculation 231 mb
[pitest] 19:24:09 PIT >> FINE : According to coverage no tests hit the mutation MutationDetails [id=MutationIdentifier [location=Location [clazz=my.package1.Class1, method=method1, methodDesc=()Ljava/lang/String;], indexes=[4], mutator=org.pitest.mutationtest.engine.gregor.mutators.ReturnValsMutator], filename=Class1.java, block=1, lineNumber=5, description=mutated return of Object value for my/package1/Class1::method1 to ( if (x != null) null else throw new RuntimeException ), testsInOrder=[], isInFinallyBlock=false, poison=NORMAL]
[pitest] 19:24:09 PIT >> FINE : According to coverage no tests hit the mutation MutationDetails [id=MutationIdentifier [location=Location [clazz=my.package2.sub.Class2, method=method1, methodDesc=()Ljava/lang/String;], indexes=[4], mutator=org.pitest.mutationtest.engine.gregor.mutators.ReturnValsMutator], filename=Class2.java, block=1, lineNumber=5, description=mutated return of Object value for my/package2/sub/Class2::method1 to ( if (x != null) null else throw new RuntimeException ), testsInOrder=[], isInFinallyBlock=false, poison=NORMAL]
[pitest] 19:24:09 PIT >> FINE : According to coverage no tests hit the mutation MutationDetails [id=MutationIdentifier [location=Location [clazz=my.package3.sub.Class3, method=method1, methodDesc=()Ljava/lang/String;], indexes=[4], mutator=org.pitest.mutationtest.engine.gregor.mutators.ReturnValsMutator], filename=Class3.java, block=1, lineNumber=6, description=mutated return of Object value for my/package3/sub/Class3::method1 to ( if (x != null) null else throw new RuntimeException ), testsInOrder=[], isInFinallyBlock=false, poison=NORMAL]
[pitest] 19:24:09 PIT >> INFO : Created 3 mutation test units
[pitest] 19:24:09 PIT >> FINE : Used memory before analysis start 23 mb
[pitest] 19:24:09 PIT >> FINE : Free Memory before analysis start 222 mb
[pitest] 19:24:09 PIT >> FINE : Running 3 units
[pitest] ================================================================================
[pitest] - Mutators
[pitest] ================================================================================
[pitest] > org.pitest.mutationtest.engine.gregor.mutators.ReturnValsMutator
[pitest] >> Generated 3 Killed 0 (0%)
[pitest] > KILLED 0 SURVIVED 0 TIMED_OUT 0 NON_VIABLE 0
[pitest] > MEMORY_ERROR 0 NOT_STARTED 0 STARTED 0 RUN_ERROR 0
[pitest] > NO_COVERAGE 3
[pitest] --------------------------------------------------------------------------------
[pitest] ================================================================================
[pitest] - Timings
[pitest] ================================================================================
[pitest] > scan classpath : < 1 second
[pitest] > coverage and dependency analysis : < 1 second
[pitest] > build mutation tests : < 1 second
[pitest] > run mutation analysis : < 1 second
[pitest] --------------------------------------------------------------------------------
[pitest] > Total : < 1 second
[pitest] --------------------------------------------------------------------------------
[pitest] ================================================================================
[pitest] - Statistics
[pitest] ================================================================================
[pitest] >> Generated 3 mutations Killed 0 (0%)
[pitest] >> Ran 0 tests (0 tests per mutation)
[pitest] 19:24:09 PIT >> INFO : Completed in 0 seconds
BUILD SUCCESSFUL
Total time: 1 second

After a hell of a lot of debugging and trying things out, I found out that the test methods were not found because the JUnit dependency hamcrest-core was missing from the classpath.
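For reference, a minimal sketch of what the path referenced by classPath could look like once the missing jar is added (the jar locations below are only examples, adjust them to your project):
<path id="mutation.path">
    <pathelement location="classes"/>
    <pathelement location="testclasses"/>
    <pathelement location="lib/junit-4.12.jar"/>
    <!-- the previously missing JUnit dependency -->
    <pathelement location="lib/hamcrest-core-1.3.jar"/>
</path>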
For everyone else experiencing a problem like this, try debugging the following method:
org.pitest.junit.RunnerSuiteFinder#apply
Depending on your test setup (e.g. which test framework you use), it might be a different implementation of
org.pitest.testapi.TestSuiteFinder
I experienced the following exception:
java.lang.NoClassDefFoundError: org/hamcrest/SelfDescribing
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.junit.internal.builders.JUnit4Builder.runnerForClass(JUnit4Builder.java:10)
at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:70)
at org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:37)
at org.pitest.junit.adapter.AdaptedJUnitTestUnit.createRunner(AdaptedJUnitTestUnit.java:107)
at org.pitest.junit.RunnerSuiteFinder.apply(RunnerSuiteFinder.java:40)
at org.pitest.junit.RunnerSuiteFinder.apply(RunnerSuiteFinder.java:35)
at org.pitest.extension.common.CompoundTestSuiteFinder.apply(CompoundTestSuiteFinder.java:20)
at org.pitest.extension.common.CompoundTestSuiteFinder.apply(CompoundTestSuiteFinder.java:9)
at org.pitest.testapi.execute.FindTestUnits.findTestUnits(FindTestUnits.java:47)
at org.pitest.testapi.execute.FindTestUnits.getTestUnits(FindTestUnits.java:40)
at org.pitest.testapi.execute.FindTestUnits.findTestUnitsForAllSuppliedClasses(FindTestUnits.java:29)
at org.pitest.coverage.execute.CoverageMinion.discoverTests(CoverageMinion.java:156)
at org.pitest.coverage.execute.CoverageMinion.getTestsFromParent(CoverageMinion.java:137)
at org.pitest.coverage.execute.CoverageMinion.main(CoverageMinion.java:83)
Caused by: java.lang.ClassNotFoundException: org.hamcrest.SelfDescribing
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 26 more
The exception is stored in the runner variable; print it with:
((ErrorReportingRunner) runner).causes.get(0).printStackTrace()

Related

get variable from config_setting in bazel

I have the following config_setting defined:
config_setting(
    name = "perception_env",
    values = {"perception": "true"},
)
print(perception_env)
However, I can't seem to print the variable; Bazel says it doesn't exist.
config_setting is only used for selecting between the different possible values in a select(). A config_setting doesn't really have a value; it's more an association of a variable (a Bazel flag, a Starlark-defined flag, platform constraints) and its value. The values attribute is basically for flag values ("perception" would have to be a Bazel flag).
For example,
config_setting(
    name = "my_config_setting_opt",
    values = {"compilation_mode": "opt"},
)
config_setting(
    name = "config_setting_dbg",
    values = {"compilation_mode": "dbg"},
)
config_setting(
    name = "config_setting_fastbuild",
    values = {"compilation_mode": "fastbuild"},
)
genrule(
    name = "gen_out",
    outs = ["out"],
    cmd = select({
        ":my_config_setting_opt": "echo Opt mode > $@",
        ":config_setting_dbg": "echo Dbg mode > $@",
        ":config_setting_fastbuild": "echo Fastbuild mode > $@",
    }),
)
The three config_setting targets declare three different associations of the --compilation_mode flag, one for each of its possible values (see https://bazel.build/docs/user-manual#compilation-mode).
The select() then declares three possible values for the cmd attribute of the genrule gen_out, and setting the --compilation_mode flag to different values changes which value for cmd is selected:
$ bazel build out --compilation_mode=dbg && cat bazel-bin/out
INFO: Build option --compilation_mode has changed, discarding analysis cache.
INFO: Analyzed target //:out (0 packages loaded, 11 targets configured).
INFO: Found 1 target...
Target //:out up-to-date:
bazel-bin/out
INFO: Elapsed time: 0.145s, Critical Path: 0.01s
INFO: 2 processes: 1 internal, 1 linux-sandbox.
INFO: Build completed successfully, 2 total actions
Dbg mode
$ bazel build out --compilation_mode=opt && cat bazel-bin/out
INFO: Build option --compilation_mode has changed, discarding analysis cache.
INFO: Analyzed target //:out (0 packages loaded, 11 targets configured).
INFO: Found 1 target...
Target //:out up-to-date:
bazel-bin/out
INFO: Elapsed time: 0.111s, Critical Path: 0.01s
INFO: 2 processes: 1 internal, 1 linux-sandbox.
INFO: Build completed successfully, 2 total actions
Opt mode
$ bazel build out --compilation_mode=fastbuild && cat bazel-bin/out
INFO: Build option --compilation_mode has changed, discarding analysis cache.
INFO: Analyzed target //:out (0 packages loaded, 11 targets configured).
INFO: Found 1 target...
Target //:out up-to-date:
bazel-bin/out
INFO: Elapsed time: 0.145s, Critical Path: 0.01s
INFO: 1 process: 1 internal.
INFO: Build completed successfully, 1 total action
Fastbuild mode
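If the intent is a custom key like perception rather than a built-in flag, one possibility (a sketch, assuming a --define flag is acceptable for your use case) is to match it with define_values instead of values:
config_setting(
    name = "perception_env",
    # matches builds invoked with: bazel build ... --define perception=true
    define_values = {"perception": "true"},
)
A select() in some rule can then branch on :perception_env; there is still nothing to print, since a config_setting only ever matches or doesn't match for a given build.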

gcov generating correct output but gcovr does not

Running through the gcovr getting-started example (https://gcovr.com/en/stable/guide.html#getting-started), I can build the file, but I see the following output from running gcovr -r .:
% gcovr -r .
------------------------------------------------------------------------------
GCC Code Coverage Report
Directory: .
------------------------------------------------------------------------------
File Lines Exec Cover Missing
------------------------------------------------------------------------------
example.cpp 0 0 --%
------------------------------------------------------------------------------
TOTAL 0 0 --%
------------------------------------------------------------------------------
If I run gcov example.cpp directly I can see that the generated .gcov data is correct:
% gcov example.cpp
File 'example.cpp'
Lines executed:87.50% of 8
Creating 'example.cpp.gcov'
I am unsure where the disconnect between this gcov output and the gcovr interpretation of it is.
I have tried downgrading to an older gcovr version, running the command on other projects, and switching python versions, but have not seen any different behavior.
My gcov and gcc are from the Xcode command line tools; gcovr was installed with pip (inside a pyenv with Python 3.8.5).
Edit: adding verbose output:
gcovr -r . -v
Filters for --root: (1)
- re.compile('^/Test/')
Filters for --filter: (1)
- DirectoryPrefixFilter(/Test/)
Filters for --exclude: (0)
Filters for --gcov-filter: (1)
- AlwaysMatchFilter()
Filters for --gcov-exclude: (0)
Filters for --exclude-directories: (0)
Scanning directory . for gcda/gcno files...
Found 2 files (and will process 1)
Pool started with 1 threads
Processing file: /Test/example.gcda
Running gcov: 'gcov /Test/example.gcda --branch-counts --branch-probabilities --preserve-paths --object-directory /Test' in '/var/folders/bc/20q4mkss6457skh36yzgm2bw0000gp/T/tmpo4mr2wh4'
Finding source file corresponding to a gcov data file
currdir /Test
gcov_fname /var/folders/bc/20q4mkss6457skh36yzgm2bw0000gp/T/tmpo4mr2wh4/example.cpp.gcov
[' -', ' 0', 'Source', 'example.cpp\n']
source_fname /Test/example.gcda
root /Test
fname /Test/example.cpp
Parsing coverage data for file /Test/example.cpp
Gathered coveraged data for 1 files
------------------------------------------------------------------------------
GCC Code Coverage Report
Directory: .
------------------------------------------------------------------------------
File Lines Exec Cover Missing
------------------------------------------------------------------------------
example.cpp 0 0 --%
------------------------------------------------------------------------------
TOTAL 0 0 --%
------------------------------------------------------------------------------

Bazel: why lint failed for variable reference?

I had this genrule in a BUILD file, but bazel build failed with the error:
in cmd attribute of genrule rule //example:create_version_pom: $(BUILD_TAG) not defined
genrule(
    name = "create_version_pom",
    srcs = ["pom_template.xml"],
    outs = ["version_pom.xml"],
    cmd = "sed 's/BUILD_TAG/$(BUILD_TAG)/g' $< > $@",
)
What's the reason, and how can I fix it?
The cmd attribute of genrule performs variable expansion for Bazel build variables before the command is executed. The $< and $@ variables for the input file and the output file are among the pre-defined variables. Additional variables can be defined with --define, e.g.:
$ cat BUILD
genrule(
    name = "genfoo",
    outs = ["foo"],
    cmd = "echo $(bar) > $@",
)
$ bazel build foo --define=bar=123
INFO: Analyzed target //:foo (5 packages loaded, 8 targets configured).
INFO: Found 1 target...
Target //:foo up-to-date:
bazel-bin/foo
INFO: Elapsed time: 0.310s, Critical Path: 0.01s
INFO: 2 processes: 1 internal, 1 linux-sandbox.
INFO: Build completed successfully, 2 total actions
$ cat bazel-bin/foo
123
So to have $(BUILD_TAG) work in the genrule, you'll want to pass
--define=BUILD_TAG=the_build_tag
Unless you actually want BUILD_TAG replaced with the literal text $(BUILD_TAG), in which case the $ needs to be escaped with another $: $$(BUILD_TAG).
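For illustration, a sketch of the original rule with the escaped form (only needed if the literal text $(BUILD_TAG) should survive into the output file):
genrule(
    name = "create_version_pom",
    srcs = ["pom_template.xml"],
    outs = ["version_pom.xml"],
    # $$ stops Bazel from expanding it, so sed writes the literal $(BUILD_TAG)
    cmd = "sed 's/BUILD_TAG/$$(BUILD_TAG)/g' $< > $@",
)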
See
https://docs.bazel.build/versions/main/be/general.html#genrule.cmd
https://docs.bazel.build/versions/main/be/make-variables.html
Note that Bazel also has a "build stamping" mechanism for bringing information like build time and version numbers into the build:
https://docs.bazel.build/versions/main/user-manual.html#workspace_status
https://docs.bazel.build/versions/main/command-line-reference.html#flag--embed_label
Using --workspace_status_command and --embed_label is a little more complicated, though.

Robocopy fails when used from TFS Builds

I set up a Command Line phase in a TFS Build to execute a Robocopy command, and it returns exit code 1, although there are no errors during the Robocopy execution.
If I run the Robocopy command directly in cmd it works, and the job log shows that Robocopy runs properly until the end:
2019-02-27T10:21:58.3234459Z Total Copied Skipped Mismatch FAILED Extras
2019-02-27T10:21:58.3234459Z Dirs : 1688 0 1688 0 0 0
2019-02-27T10:21:58.3234459Z Files : 6107 6 6101 0 0 0
2019-02-27T10:21:58.3234459Z Bytes : 246.01 m 299.2 k 245.71 m 0 0 0
2019-02-27T10:21:58.3234459Z Times : 0:00:17 0:00:00 0:00:00 0:00:17
2019-02-27T10:21:58.3234459Z
2019-02-27T10:21:58.3234459Z
2019-02-27T10:21:58.3234459Z Speed : 3879329 Bytes/sec.
2019-02-27T10:21:58.3234459Z Speed : 221.976 MegaBytes/min.
2019-02-27T10:21:58.3234459Z
2019-02-27T10:21:58.3234459Z Ended : Wed Feb 27 11:21:58 2019
2019-02-27T10:21:58.3702460Z ##[error]Process completed with exit code 1.
Robocopy uses exit codes greater than 0 even on success.
In your example, exit code 1 means "One or more files were copied successfully" (that is, new files have arrived).
To fix this you could create a PowerShell script that executes the copy and overrides the exit code, like this:
param( [String] $sourcesDirectory, [String] $destinationDirectory, [String] $attributes)
robocopy $sourcesDirectory $destinationDirectory $attributes
if( $LASTEXITCODE -ge 8 )
{
    throw ("An error occurred while copying. [RoboCopyCode: $($LASTEXITCODE)]")
}
else
{
    $global:LASTEXITCODE = 0;
}
exit 0
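The build step can then call that script instead of Robocopy directly, for example (the script name and paths here are made up; the TFS build variable is just an illustration):
powershell -NoProfile -File Copy-WithRobocopy.ps1 -sourcesDirectory "$(Build.SourcesDirectory)\output" -destinationDirectory "\\fileserver\drop" -attributes "/E"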
Robocopy uses exit codes differently: exit code 1 is not a real error, it just says that one or more files were copied successfully.
TFS, however, treats exit code 1 as a real error and fails the build.
To solve that you need to change the Robocopy exit code:
(robocopy c:\dirA c:\dirB *.*) ^& IF %ERRORLEVEL% LEQ 1 exit 0
The ^& IF %ERRORLEVEL% LEQ 1 exit 0 part converts an exit code of 1 (or 0) to 0, so the TFS build does not fail.

Invoke-Pester -CodeCoverage claims 0% code coverage when testing module function

I wrote a function for dbatools called New-DbaSqlConnectionStringBuilder, and I wrote unit tests for it that I know cover most of the function. However, I am getting a 0% code coverage report from the following command.
Invoke-Pester .\tests\New-DbaSqlConnectionStringBuilder.Tests.ps1 -CodeCoverage .\functions\New-DbaSqlConnectionStringBuilder.ps1
Abridged output below:
**********************
Running C:\Users\zippy\Documents\dbatools\tests\New-
. . .
Unit tests happen
. . .
Passed: 16 Failed: 0 Skipped: 0 Pending: 0 Inconclusive: 0
Code coverage report:
Covered 0.00% of 21 analyzed commands in 1 file.
To get this version of the code:
git clone https://github.com/zippy1981/dbatools.git
cd dbatools
git checkout testing/PesterCodeCoverage
Import-Module .\dbatools.psd1
What am I doing wrong?
Just psychic debugging:
Your module is installed, and your tests are running against the installed module instead of the .\functions\New-DbaSqlConnectionStringBuilder.ps1 file.
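One way to check this (a sketch, assuming the locally checked-out module is the code you want to measure) is to remove any already-loaded copy and force-import the local manifest before running Pester:
# drop any copy of dbatools that is already loaded (e.g. from an installed location)
Remove-Module dbatools -ErrorAction SilentlyContinue
# import the checked-out module so the tests exercise the local source
Import-Module .\dbatools.psd1 -Force
# run the tests and measure coverage against the same local file
Invoke-Pester .\tests\New-DbaSqlConnectionStringBuilder.Tests.ps1 `
    -CodeCoverage .\functions\New-DbaSqlConnectionStringBuilder.ps1
If coverage is still 0%, the imported module is probably not loading that exact file, and -CodeCoverage needs to point at whatever file the function is actually loaded from.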
