Invoke-Pester -CodeCoverage claims 0% code coverage when testing module function

I wrote a function for dbatools called New-DbaSqlConnectionStringBuilder. I wrote unit tests for it. I know these unit tests cover most of the function. I am getting a 0% code coverage report from the following command:
Invoke-Pester .\tests\New-DbaSqlConnectionStringBuilder.Tests.ps1 -CodeCoverage .\functions\New-DbaSqlConnectionStringBuilder.ps1
Abridged output below:
**********************
Running C:\Users\zippy\Documents\dbatools\tests\New-
. . .
Unit tests happen
. . .
Passed: 16 Failed: 0 Skipped: 0 Pending: 0 Inconclusive: 0
Code coverage report:
Covered 0.00% of 21 analyzed commands in 1 file.
To get this version of the code:
git clone https://github.com/zippy1981/dbatools.git
cd dbatools
git checkout testing/PesterCodeCoverage
Import-Module .\dbatools.psd1
What am I doing wrong?

Just psychic debugging:
Your module is installed, and your tests are running against the installed module instead of the .\functions\New-DbaSqlConnectionStringBuilder.ps1 file.
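A quick way to check and fix this, assuming the repository's dbatools.psd1 loads the function from .\functions\ (standard cmdlets; paths as in the question):
# Show where the currently loaded copy of the function comes from; if this points
# at an installed module rather than your working copy, coverage will stay at 0%.
(Get-Command New-DbaSqlConnectionStringBuilder).Module.Path

# Reload the module from the working copy, then measure coverage again.
Remove-Module dbatools -ErrorAction SilentlyContinue
Import-Module .\dbatools.psd1 -Force
Invoke-Pester .\tests\New-DbaSqlConnectionStringBuilder.Tests.ps1 -CodeCoverage .\functions\New-DbaSqlConnectionStringBuilder.ps1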

Related

gcov generating correct output but gcovr does not

Running through the getting-started example from the gcovr guide (https://gcovr.com/en/stable/guide.html#getting-started), I can build the example and I see the following output from running gcovr -r .:
% gcovr -r .
------------------------------------------------------------------------------
GCC Code Coverage Report
Directory: .
------------------------------------------------------------------------------
File Lines Exec Cover Missing
------------------------------------------------------------------------------
example.cpp 0 0 --%
------------------------------------------------------------------------------
TOTAL 0 0 --%
------------------------------------------------------------------------------
If I run gcov example.cpp directly I can see that the generated .gcov data is correct:
% gcov example.cpp
File 'example.cpp'
Lines executed:87.50% of 8
Creating 'example.cpp.gcov'
I am unsure where the disconnect is between this gcov output and gcovr's interpretation of it.
I have tried downgrading to an older gcovr version, running the command on other projects, and switching python versions, but have not seen any different behavior.
My gcov and gcc are from the Xcode command line tools. gcovr was pip installed (within pyenv with Python 3.8.5).
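For reference, the compile-and-run steps from the getting-started example boil down to roughly the following (flags may differ slightly between versions of the guide):
g++ -fprofile-arcs -ftest-coverage -fPIC -O0 example.cpp -o program   # build with coverage instrumentation
./program                                                             # run to produce the .gcda execution data
gcovr -r .                                                            # summarize the coverage data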
Edit: adding verbose output:
gcovr -r . -v
Filters for --root: (1)
- re.compile('^/Test/')
Filters for --filter: (1)
- DirectoryPrefixFilter(/Test/)
Filters for --exclude: (0)
Filters for --gcov-filter: (1)
- AlwaysMatchFilter()
Filters for --gcov-exclude: (0)
Filters for --exclude-directories: (0)
Scanning directory . for gcda/gcno files...
Found 2 files (and will process 1)
Pool started with 1 threads
Processing file: /Test/example.gcda
Running gcov: 'gcov /Test/example.gcda --branch-counts --branch-probabilities --preserve-paths --object-directory /Test' in '/var/folders/bc/20q4mkss6457skh36yzgm2bw0000gp/T/tmpo4mr2wh4'
Finding source file corresponding to a gcov data file
currdir /Test
gcov_fname /var/folders/bc/20q4mkss6457skh36yzgm2bw0000gp/T/tmpo4mr2wh4/example.cpp.gcov
[' -', ' 0', 'Source', 'example.cpp\n']
source_fname /Test/example.gcda
root /Test
fname /Test/example.cpp
Parsing coverage data for file /Test/example.cpp
Gathered coveraged data for 1 files
------------------------------------------------------------------------------
GCC Code Coverage Report
Directory: .
------------------------------------------------------------------------------
File Lines Exec Cover Missing
------------------------------------------------------------------------------
example.cpp 0 0 --%
------------------------------------------------------------------------------
TOTAL 0 0 --%
------------------------------------------------------------------------------

CppUTest on Jenkins

We use CppUTest to run unit tests.
This is driven by CMake/Ninja: after building the tests, we execute them with ninja test.
An example of the output is:
1/3 Test #1: Test1................................................... Passed 0.03 sec
Start 2: Test2
2/3 Test #2: Test2......................................................... Passed 0.00 sec
Start 3: Test3
3/3 Test #3: Test3..............................................................***Exception: SegFault 0.00 sec
66% tests passed, 1 tests failed out of 3
Total Test time (real) = 0.26 sec
The following tests FAILED:
3 - Test3 (SEGFAULT)
Errors while running CTest
FAILED: CMakeFiles/test.util
This is fine if I trigger the build locally on my machine and analyze the output manually. What I am looking for is an existing solution that helps Jenkins analyze the output.
Right now, Jenkins executes the build and exits "successfully", because the ninja test command itself succeeded, even though not all of the tests did.
Maybe you already found this, but you can produce JUnit-format output from CppUTest with the -ojunit flag. Jenkins should then be able to import the results from the generated files.
CppUTest Commandline Switches
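A rough sketch (binary names and paths are hypothetical; CppUTest typically writes one cpputest_<Group>.xml file per test group into the working directory):
# Run the test binaries with JUnit output enabled, instead of (or after) ninja test.
./tests/Test1 -ojunit
./tests/Test2 -ojunit
./tests/Test3 -ojunit
# Then let Jenkins import the XML, e.g. via the "Publish JUnit test result report"
# post-build action in a freestyle job or junit 'cpputest_*.xml' in a pipeline,
# so failing tests mark the build as unstable or failed.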

Why are Rust documentation tests not executed in Docker when cross-compiling to musl?

My documentation tests are silently not executed in my Docker environment while everything works on both Windows and Ubuntu/Debian hosts.
I created a minimal GitHub repository to demonstrate the issue. I tried two different versions of Rust nightly and Rust stable, debug/release, all without success. See my Dockerfile and the complete build output.
Example code:
/// Fixes string arrays which can also be objects into string arrays
/// # Examples
///
/// ```
/// assert_eq!(cargo_test_doc_docker::add(1, 2), 3);
/// ```
pub fn add(a: i32, b: i32) -> i32 {
a + b
}
Result when executing on Debian:
arturh@host:~/projects/cargo-test-doc-docker$ cargo test
Compiling cargo-test-doc-docker v0.1.0 (/home/arturh/projects/cargo-test-doc-docker)
Finished test [unoptimized + debuginfo] target(s) in 2.39s
Running target/debug/deps/cargo_test_doc_docker-9d5ae146cd4c3628
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running target/debug/deps/cargo_test_doc_docker-2a696d2579128ce1
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Doc-tests cargo-test-doc-docker
running 1 test
test src/lib.rs - add (line 4) ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
The problem occurs when executing the build on Docker. This is a minimal Dockerfile that reproduces the problem:
FROM ekidd/rust-musl-builder:nightly-2020-01-26-openssl11 as build
COPY --chown=rust:rust . .
RUN cargo test; echo $?
Result for every Rust toolchain I tried:
Step 6/17 : RUN cargo test; echo $?
---> Running in b266fc72f3c1
Compiling cargo-test-doc-docker v0.1.0 (/home/rust/src)
Finished test [unoptimized + debuginfo] target(s) in 0.32s
Running target/x86_64-unknown-linux-musl/debug/deps/cargo_test_doc_docker-7b40e7e5b47f49eb
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running target/x86_64-unknown-linux-musl/debug/deps/cargo_test_doc_docker-0bfec9752a7bec14
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
0
It does not even try to execute any doc tests and exits with zero so it's not easily noticed. I guess it must be something the Docker base image does, but what could that be?
Cross Compilation
This is a logical, if surprising, outcome of cross-compilation.
To understand why, imagine that you:
Compile on a Linux x64 machine (Host).
Target a Windows ARM machine.
The generated code cannot be executed on the current host (Linux x64): it is prepared for a different CPU (instruction set) and OS (system calls).
Since the tests -- unit tests, integration tests, and documentation tests -- are also generated for the target architecture, they cannot be executed on the host either.
What to do with the tests?
If your code has no platform-specific dependencies, then you can content yourself with compiling the tests for the host and running those.
Otherwise, you will need access to a machine that can actually run the cross-compiled binaries. You can still use cross-compilation to speed up building those binaries, and then upload them to either a physical or virtual machine to run them.
AFAIK Cargo does not help with the latter, so you'll need your own scripts.
Shepmaster was right: when I target x86_64-unknown-linux-musl, it also does not work locally on Debian:
arturh@host:~/projects/cargo-test-doc-docker$ cargo test --target=x86_64-unknown-linux-musl; echo $?
Compiling cargo-test-doc-docker v0.1.0 (/home/arturh/projects/cargo-test-doc-docker)
Finished test [unoptimized + debuginfo] target(s) in 0.28s
Running target/x86_64-unknown-linux-musl/debug/deps/cargo_test_doc_docker-8dfff5631875d404
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running target/x86_64-unknown-linux-musl/debug/deps/cargo_test_doc_docker-eb877250b708174b
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
0
I guess I need a separate build step that runs the doc tests against the x86_64-unknown-linux-gnu target.
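A minimal sketch of that split, assuming both the gnu and musl standard libraries are installed for the toolchain:
# 1. Run unit, integration, and doc tests against the host (glibc) target.
cargo test --target x86_64-unknown-linux-gnu
# 2. Cross-compile the release binary for the musl target (no tests are run here).
cargo build --release --target x86_64-unknown-linux-musl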

In Jenkins job, behave tests stop after any failure

I have created a Jenkins "freestyle" job in which I am trying to run multiple BDD test suites. Following are the commands I have put in the Jenkins/Build/Execute shell section:
cd ~/FEXT_BETA_BDD
rm -rf allure_reports allure-reports allure-results
pip install behave
pip install selenium
pip install -r features/requirements.txt
# execute features in plan section
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/plan/*.feature
# execute features in blueprint section
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/blueprint/*.feature
What I have found is that in Jenkins, if there is any intermittent test case failure, a message such as the following is shown in the Console Output:
"
...
0 features passed, 1 failed, 0 skipped
0 scenarios passed, 1 failed, 0 skipped
3 steps passed, 1 failed, 1 skipped, 0 undefined
Took 2m48.770s
Build step 'Execute shell' marked build as failure
"
And the leftover test cases are skipped. But if I run the behave command directly on my local host, I don't get this behaviour: the failure is detected and the remaining test cases continue until all are finished.
So how may I work around this issue in Jenkins?
Thanks,
Jack
You may try the following syntax:
set +e
# execute features in plan section
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/plan/*.feature || echo 'ALERT: Build failed while running the plan section'
# execute features in blueprint section
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/blueprint/*.feature || echo 'ALERT: Build failed while running the blueprint section'
# Restoring original configuration
set -e
Note:
The goal of set -e is to cause the shell to abort any time an error occurs. If you look at your log output, you will notice sh -xe at the start of execution, which confirms that the Execute Shell step in Jenkins uses the -e option. To disable it, you can use set +e instead. However, it's good to restore it once your purpose is fulfilled, so that subsequent commands produce the expected result.
Ref: https://superuser.com/questions/1113014/what-would-set-e-and-set-x-commands-do-in-the-context-of-a-shell-script
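A tiny standalone illustration of what -e and +e change, independent of behave or Jenkins:
#!/bin/sh
set -e
false || echo "guarded: a || alternative keeps set -e from aborting the script"
set +e
false                  # with -e disabled, this failure is tolerated
echo "still running after the unguarded failure"
set -e                 # restore strict mode for the rest of the script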
The console output from the SummaryReporter above indicates that you have only one feature with one scenario (which fails). behave has no built-in behaviour where it stops when the first scenario fails.
An early abort of the test run can only occur if critical things happen:
A failure/exception in the before_all() hook occurs
A critical exception is raised (SystemExit, KeyboardInterrupt) to end the test run
Your implementation tells behave to abort the test run (this makes sense on critical failures when all other tests will also fail; why waste the time)
BUT: If the test run is aborted early, all the features/scenarios that are not executed yet are reported as untested counts in the SummaryReporter.
...
0 features passed, 1 failed, 0 skipped, 2 untested
0 scenarios passed, 1 failed, 0 skipped, 3 untested
0 steps passed, 1 failed, 0 skipped, 0 undefined, 6 untested
HINT: Untested counts are normally hidden. They are only shown if the counter is not zero (greater than zero).
This is not the case in your description.
SEE ALSO:
behave: features/runner.abort_by_user.feature

YOCTO - First build for BBB

I am trying to use the Yocto tools for the first time, for my BeagleBone Black.
First I ran this bash script to set up Yocto:
#!/bin/bash
WKDIR=/work
mkdir -p $WKDIR/beaglebone-black/yocto/sources
mkdir -p $WKDIR/beaglebone-black/yocto/builds
cd $WKDIR/beaglebone-black/yocto/sources
git clone -b morty git://git.yoctoproject.org/poky.git poky-morty
cd $WKDIR/beaglebone-black/yocto/
source sources/poky-morty/oe-init-build-env builds/build-bbb-morty
Then I edited the local.conf file in the "build-bbb-morty/conf" directory:
MACHINE ?= "beaglebone"
and added
DL_DIR ?= "${TOPDIR}/../dl"
IMAGE_INSTALL_append = " kernel-modules kernel-devicetree"
Then I ran bitbake: bitbake core-image-minimal
After about 8 hours on my fifth-generation Core i7 I got the result below in my terminal output, and I have no idea what I need to do to fix it:
bitbake core-image-minimal
Parsing recipes: 100% |########################################################################################################| Time: 0:02:55
Parsing of 864 .bb files complete (0 cached, 864 parsed). 1318 targets, 67 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies
Build Configuration:
BB_VERSION = "1.32.0"
BUILD_SYS = "x86_64-linux"
NATIVELSBSTRING = "Ubuntu-16.04"
TARGET_SYS = "arm-poky-linux-gnueabi"
MACHINE = "beaglebone"
DISTRO = "poky"
DISTRO_VERSION = "2.2.1"
TUNE_FEATURES = "arm armv7a vfp neon callconvention-hard cortexa8"
TARGET_FPU = "hard"
meta
meta-poky
meta-yocto-bsp = "morty:a3fa5ce87619e81d7acfa43340dd18d8f2b2d7dc"
NOTE: Fetching uninative binary shim from http://downloads.yoctoproject.org/releases/uninative/1.4/x86_64-nativesdk-libc.tar.bz2;sha256sum=101ff8f2580c193488db9e76f9646fb6ed38b65fb76f403acb0e2178ce7127ca
--2017-01-18 15:51:09-- http://downloads.yoctoproject.org/releases/uninative/1.4/x86_64-nativesdk-libc.tar.bz2
Resolving downloads.yoctoproject.org (downloads.yoctoproject.org)... 198.145.20.127
Connecting to downloads.yoctoproject.org (downloads.yoctoproject.org)|198.145.20.127|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2473216 (2.4M) [application/octet-stream]
Saving to: ‘/work/beaglebone-black/yocto/builds/build-bbb-morty/../dl/uninative/101ff8f2580c193488db9e76f9646fb6ed38b65fb76f403acb0e2178ce7127ca/x86_64-nativesdk-libc.tar.bz2’
2017-01-18 15:51:18 (297 KB/s) - ‘/work/beaglebone-black/yocto/builds/build-bbb-morty/../dl/uninative/101ff8f2580c193488db9e76f9646fb6ed38b65fb76f403acb0e2178ce7127ca/x86_64-nativesdk-libc.tar.bz2’ saved [2473216/2473216]
Initialising tasks: 100% |#####################################################################################################| Time: 0:00:14
NOTE: Executing SetScene Tasks
NOTE: Executing RunQueue Tasks
WARNING: attr-native-2.4.47-r0 do_fetch: Failed to fetch URL http://download.savannah.gnu.org/releases/attr/attr-2.4.47.src.tar.gz, attempting MIRRORS if available
WARNING: libpng-native-1.6.24-r0 do_fetch: Failed to fetch URL http://distfiles.gentoo.org/distfiles/libpng-1.6.24.tar.xz, attempting MIRRORS if available
ERROR: core-image-minimal-1.0-r0 do_image_wic: Function failed: do_image_wic (log file is located at /work/beaglebone-black/yocto/builds/build-bbb-morty/tmp/work/beaglebone-poky-linux-gnueabi/core-image-minimal/1.0-r0/temp/log.do_image_wic.23788)
ERROR: Logfile of failure stored in: /work/beaglebone-black/yocto/builds/build-bbb-morty/tmp/work/beaglebone-poky-linux-gnueabi/core-image-minimal/1.0-r0/temp/log.do_image_wic.23788
Log data follows:
| DEBUG: Executing python function set_image_size
| DEBUG: Python function set_image_size finished
| DEBUG: Executing shell function do_image_wic
| Checking basic build environment...
| Done.
|
| Build artifacts not found, exiting.
| (Please check that the build artifacts for the machine
| selected in local.conf actually exist and that they
| are the correct artifacts for the image (.wks file))
|
| The artifact that couldn't be found was kernel-dir:
| /work/beaglebone-black/yocto/builds/build-bbb-morty/tmp/deploy/images/beaglebone
| WARNING: exit code 1 from a shell command.
| ERROR: Function failed: do_image_wic (log file is located at /work/beaglebone-black/yocto/builds/build-bbb-morty/tmp/work/beaglebone-poky-linux-gnueabi/core-image-minimal/1.0-r0/temp/log.do_image_wic.23788)
ERROR: Task (/work/beaglebone-black/yocto/sources/poky-morty/meta/recipes-core/images/core-image-minimal.bb:do_image_wic) failed with exit code '1'
NOTE: Tasks Summary: Attempted 1771 tasks of which 6 didn't need to be rerun and 1 failed.
Summary: 1 task failed:
/work/beaglebone-black/yocto/sources/poky-morty/meta/recipes-core/images/core-image-minimal.bb:do_image_wic
Summary: There were 2 WARNING messages shown.
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
While I am not sure this is the reason for the problem, the preferred method of adding packages to the image in the local.conf context is the CORE_IMAGE_EXTRA_INSTALL variable.
Therefore change:
IMAGE_INSTALL_append = " kernel-modules kernel-devicetree"
to
CORE_IMAGE_EXTRA_INSTALL += "kernel-modules kernel-devicetree"
I think there is no problem with your overall approach. It seems to be a build environment problem, and the error log should confirm that.
Your log is located at "/work/beaglebone-black/yocto/builds/build-bbb-morty/tmp/work/beaglebone-poky-linux-gnueabi/core-image-minimal/1.0-r0/temp/log.do_image_wic.23788".
Your error log indicates that some of the fetch URLs failed.
You can try tunnelling through a proxy, or simply run bitbake again, because fetches can also fail intermittently due to network conditions.
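For example, a retry could look like this (reusing the setup commands from the question; the cleansstate step is optional):
# Re-enter the build environment and retry the image build.
cd /work/beaglebone-black/yocto
source sources/poky-morty/oe-init-build-env builds/build-bbb-morty
bitbake -c cleansstate core-image-minimal   # optional: clear the failed image task's state first
bitbake core-image-minimal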

Resources