A difference between statement and decision coverage

Statement coverage is said to ensure that every statement in the code is executed at least once.
Decision/branch coverage is said to test that each branch/outcome of a decision is exercised, i.e. the statements in both the false and true branches will be executed.
But isn't that the same thing? For statement coverage I need to execute all statements, so I assume that can only be done by running all possible paths. I know I am missing something here.

The answer by Paul isn't quite right, at least I think so (according to ISTQB's definitions). There is quite a significant difference between statement, decision/branch, and condition coverage.
I'll use the sample from the other answer, but modified a bit so I can show all three coverage types. The tests written here give 100% coverage for each type.
if (a || b) {
    test1 = true;
}
else {
    if (c) {
        test2 = true;
    }
}
We have two decision statements here, if(a||b) and if(c). To fully explain the coverage differences:
Statement coverage has to execute each statement at least once, so we need just two tests:
a=true, b=false - that gives us the path if(a||b) true -> test1 = true
a=false, b=false, c=true - that gives us the path if(a||b) false -> else -> if(c) -> test2 = true
This way we have executed each and every statement.
Branch/decision coverage needs one more test:
a=false, b=false, c=false - that leads us to the second if, but this time we execute the false branch of that statement, which wasn't executed under statement coverage.
That way we have all the branches tested, meaning we went through all the paths.
Condition coverage needs another test:
a=false, b=true - that leads through the same path as the first test, but exercises the other condition of the OR expression (a||b).
That way we have all conditions tested, meaning that we went through all paths (branches) and triggered each decision with every condition we could: the first if was true in the first test because a=true triggered it, and in the last test because b=true triggered it. Of course, someone could argue that the case a=true, b=true should be tested as well, but if we check how OR works we can see it isn't needed; likewise, variable c can have any value in those tests because it is not evaluated.
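If it helps, here is a minimal sketch of those four tests as JUnit-style methods; the coverageSample helper and the test names are my own inventions for illustration, not from the original answer:
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class CoverageExampleTest {

    // The snippet above, rewritten as a helper that reports which assignment ran.
    private String coverageSample(boolean a, boolean b, boolean c) {
        if (a || b) {
            return "test1";
        } else {
            if (c) {
                return "test2";
            }
        }
        return "none";
    }

    @Test void statementTest1() { assertEquals("test1", coverageSample(true, false, false)); }  // statement coverage, test 1
    @Test void statementTest2() { assertEquals("test2", coverageSample(false, false, true)); }  // statement coverage, test 2
    @Test void branchTest()     { assertEquals("none",  coverageSample(false, false, false)); } // extra test for branch coverage
    @Test void conditionTest()  { assertEquals("test1", coverageSample(false, true, false)); }  // extra test for condition coverage
}
Running the first two tests alone reaches 100% statement coverage; adding the third reaches branch coverage; the fourth adds condition coverage for the OR.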
At least I interpreted it this way. If someone is still interested :)
EDIT: In most sources I have found lately, the terms decision coverage and branch coverage are treated as equivalent, and what I described as decision coverage is in fact condition coverage; hence this update of the answer.

If the tests have complete branch coverage, then we can say they also have complete statement coverage, but not vice versa:
100% branch coverage => 100% statement coverage
100% statement coverage does not imply 100% branch coverage
The reason is that branch coverage, apart from executing all the statements, also verifies that the tests execute all branches, which can be interpreted as covering all edges in the control-flow graph.
if (a) {
    if (b) {
        bool statement1 = true;
    }
}
a = true, b = true will give 100% statement coverage, but not branch coverage.
For branch coverage we need to cover all the edges, including the false edges of both if statements, which the single statement-coverage test above misses.
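As a rough sketch (the BranchCoverageDemo class and the check helper are hypothetical, mirroring the snippet above), these inputs add the missing edges:
public class BranchCoverageDemo {
    static void check(boolean a, boolean b) {
        if (a) {
            if (b) {
                boolean statement1 = true; // the only statement in the snippet
            }
        }
    }

    public static void main(String[] args) {
        check(true, true);   // alone, this already yields 100% statement coverage
        check(true, false);  // adds the false edge of the inner if
        check(false, false); // adds the false edge of the outer if
    }
}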

Nice question. The explanation I often use is that an if-statement without an else-branch still has an invisible "empty" else-statement:
Plain statement coverage just insists that all statements that are actually there are really executed.
Branch coverage insists that even invisible else-branches are executed.
Similar situations occur with switch-statements without a default case, and with repeat-until loops: branch coverage requires that a default case is executed, and that a repeat-until loop is executed at least twice.
A code example:
if (passwordEnteredOK()) {
    enterSystem();
}
/* Invisible else part
else {
    // do nothing
}
*/
With statement coverage you just check that, with a correct password, you can use the system. With branch coverage you also test that, with an incorrect password, you will not enter the system.
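A self-contained sketch of those two tests, assuming a made-up LoginGate stand-in for passwordEnteredOK()/enterSystem():
public class LoginGateDemo {
    static class LoginGate {
        boolean loggedIn = false;
        void tryLogin(boolean passwordEnteredOK) {
            if (passwordEnteredOK) {
                loggedIn = true; // enterSystem()
            }
            // invisible else: do nothing
        }
    }

    public static void main(String[] args) {
        LoginGate ok = new LoginGate();
        ok.tryLogin(true);
        assert ok.loggedIn;   // statement coverage: correct password enters the system

        LoginGate bad = new LoginGate();
        bad.tryLogin(false);
        assert !bad.loggedIn; // branch coverage: wrong password takes the invisible else
    }
}
Run with java -ea to enable the assertions.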

You may have a statement like:
if (a || b || (c && d && !e)) {
    test1 = true;
} else {
    test2 = false;
}
If your code coverage tool says both the test1 and test2 lines are hit, then you have statement coverage; but to get full branch coverage you will need to test when a is true, when a is false but b is true, when a and b are false but c and d are true and e is false, and so on.
Branch coverage covers every potential combination of branch choices, so it is harder to achieve 100% coverage.
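For illustration, a hand-picked input set along those lines might look like this (the decision helper is hypothetical, lifted from the condition above):
public class ConditionDemo {
    static boolean decision(boolean a, boolean b, boolean c, boolean d, boolean e) {
        return a || b || (c && d && !e);
    }

    public static void main(String[] args) {
        System.out.println(decision(true,  false, false, false, false)); // true via a
        System.out.println(decision(false, true,  false, false, false)); // true via b
        System.out.println(decision(false, false, true,  true,  false)); // true via (c && d && !e)
        System.out.println(decision(false, false, false, false, false)); // false: the else branch
    }
}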

Related

Pitest - Test Coverage Report Data

I understand that when outputting the test coverage report from Pitest, you are presented with the following structure:
<block classname='classname' method='methodName' number='1'>
    <tests>
        <test name='testName1(file1)'/>
        <test name='testName2(file2)'/>
    </tests>
</block>
However, I am unsure of the meaning of the 'number' attribute. What is the significance of it?
Thanks
The number is the number of the block within the method.
Blocks are chunks of code that execute together. To calculate coverage, pitest creates a probe in each block. If the probe is executed, pitest knows that all instructions within that block have been executed, unless an exception is thrown.
Most coverage systems place the probe at the end of a block, so in the event of an exception the coverage will be underreported. Pitest places probes at the beginning of each block. This causes overreporting, but that is desirable for pitest, since the coverage data is used to target tests against mutants: it is better to run a test that does not detect a mutant than to skip one that would.
The precise definition of a block has changed between pitest releases. Originally, it roughly corresponded to branches in the code.
int foo(boolean b) {
    System.out.println("hello");    // block 1
    int i = 2;                      // block 1
    if (b) {                        // block 1
        System.out.println("boo");  // block 2
        i = i + 42;                 // block 2
    }
    System.out.println("bye");      // block 3
    return i;                       // block 3
}
So the code above has three blocks, as shown in the comments. Later releases have made the blocks smaller, so tests can be targeted more accurately when exceptions occur.
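Conceptually, the probe placement could be pictured like this; this is only an illustration of the idea, with a made-up hit recorder, not pitest's actual instrumentation:
public class ProbeDemo {
    static final java.util.Set<Integer> HITS = new java.util.HashSet<>();

    static void hit(int block) { HITS.add(block); } // the "probe"

    static int foo(boolean b) {
        hit(1);                        // probe at the start of block 1
        System.out.println("hello");
        int i = 2;
        if (b) {
            hit(2);                    // probe at the start of block 2
            System.out.println("boo");
            i = i + 42;
        }
        hit(3);                        // probe at the start of block 3
        System.out.println("bye");
        return i;
    }

    public static void main(String[] args) {
        foo(false);
        System.out.println("covered blocks: " + HITS); // [1, 3]
    }
}
With foo(false), only blocks 1 and 3 are recorded; and if an exception were thrown inside block 1 after its probe ran, the block would still be reported as covered, which is the overreporting trade-off described above.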

Jenkins and its WHEN statement

It is very tough for me to understand how Jenkins works. In general, when you read the documentation and define a pipeline, things go smoothly. I understand pipelines, stages, steps, and scripts. What I don't understand is declaration time vs. runtime, especially when it comes to the when declaration and the evaluation of its expression. For example:
What is the form of the expression? Should it return something like return true;, or should it be a statement like true?
When does it get executed? If I access params.MY_PARAMETER_FROM_INPUT, does when have access to the value picked by the user?
Is it possible to switch execution between runtime and pipeline-declaration time?
Can I ask for a stage (input with a message box) only if a given condition within when is met, and if not, skip the question but run the stage anyway?
When do you use if in a script, and when when in a stage? Can when be defined elsewhere: within steps, scripts, the pipeline?
For example, in a stage I put when { expression { params.ENV == 'prod' } } input { message "Really?" ok "Yeah!" }, but the expression was ignored and the question was always asked (my current understanding is that it should skip the stage/abort the whole pipeline when the ENV input param is different from "prod").
Any thoughts?
OK. Using Jenkins' existing semantics, I have managed to achieve what I wanted with the following snippet:
stage('Confirm if production related') {
    when {
        beforeInput true
        expression { params.ENV == 'production'; }
    }
    input {
        message "Should I deploy to PRODUCTION?"
        ok "Yes, do it!"
    }
    steps {
        script { _ }
    }
}
Not bad but not good either.

2 expect blocks in Spock?

I have the following Spock test, which passes:
def "test"() {
given:
def name = "John"
expect:
name.length() == 4
when:
name = name.concat(name)
then:
name.length() == 8
}
But when I modify the last then block and make it an expect block...
// previous part same
expect:
name.length() == 8
I am getting:
Groovy-Eclipse: Groovy:'expect' is not allowed here; instead, use one of: [and, then]
Is it because multiple expect blocks are not allowed in a single test? If so, is this documented anywhere? There is a similar test here written with given - expect - when - then, but it is not clear why a second expect was not used, although what is being asserted is the same, just flipped.
when-expect is simply a syntax error with regard to Spock's specification DSL. The compiler message already tells you how to solve your problem: after when you need then (or and first, if you want to structure your when block into multiple sections). In contrast, expect is a kind of when-then contracted into a single block, because the stimulus and the condition verifying the response appear together. Block labels and how to use them are documented here.
Under Specifications as Documentation, you can learn more about why you might want to use and and block labels. Under Invocation Order, you can learn what you can achieve by using multiple then blocks, in contrast to then-and.
You can use multiple expect blocks within one feature, no problem. But you do need to make sure that your when-then (if any) is complete before you can use another expect.
For example, this is a valid feature:
def "my feature"() {
given: true
and: true
and: true
expect: true
when: true
and: true
then: true
and: true
then: true
expect: true
and: true
cleanup: true
}

Jenkins Pipeline Multiconfiguration Project

Original situation:
I have a job in Jenkins that runs an Ant script. I easily managed to test this Ant script on more than one software version using a "Multi-configuration project".
This type of project is really cool because it allows me to specify all the versions of the two pieces of software I need (in my case Java and Matlab), and it will run my Ant script with all the combinations of the parameters.
Those parameters are then used as strings to be concatenated into the location of the executable used by my Ant script.
Example: env.MATLAB_EXE=/usr/local/MATLAB/${MATLAB_VERSION}/bin/matlab
This works perfectly, but now I am migrating this script to a pipeline version of it.
Pipeline migration:
I managed to implement the same script in a pipeline fashion using the Parameterized pipeline plugin. With this I reached the point where I can manually select which versions of my software are used if I trigger the build manually, and I also found a way to execute the job periodically, selecting the parameters I want for each run.
This solution seems to work, but it is not really satisfying.
My multi-config project had some features that this one does not:
With more than one parameter, it combines them and executes each combination
The executions are clearly separated, and in the build history/build details it is easy to recognize which settings had been used
Just adding a new "possible" value to a parameter spawns the desired executions
Request
So I wonder if there is a better solution to my problem, one that also satisfies the points above.
Long story short: is there a way to implement a multi-configuration project in Jenkins but using the pipeline technology?
I've seen this and similar questions asked a lot lately, so it seemed that it would be a fun exercise to work this out...
A matrix/multi-config job, visualized in code, would really just be a few nested for loops, one for each axis of parameters.
You could build something fairly simple with some hard-coded for loops to iterate over a few lists. Or you can get more complicated and do some recursive looping, so you don't have to hard-code the specific loops.
DISCLAIMER: I do ops much more than I write code. I am also very new to Groovy, so this can probably be done more cleanly, and there are probably a lot of groovier things that could be done, but this gets the job done, anyway.
With a little work, this matrixBuilder could be wrapped up in a class so you could pass in a task closure and the axis list and get the task map back. Stick it in a shared library and use it anywhere. It should be pretty easy to add some of the other features of multi-configuration jobs, such as filters.
This attempt uses a recursive matrixBuilder function to work through any number of parameter axes and build all the combinations, then executes them in parallel (obviously depending on node availability).
/*
 * All the config axes are defined here.
 * Add as many lists of axes to axisList as you need;
 * all combinations will be built.
 */
def axisList = [
    ["ubuntu", "rhel", "windows", "osx"],       // agents
    ["jdk6", "jdk7", "jdk8"],                   // tools
    ["banana", "apple", "orange", "pineapple"]  // fruit
]
def tasks = [:]
def comboBuilder
def comboEntry = []
def task = {
    // Builds and returns the task for each combination.
    /* Map the entries back to a more readable format;
       the index corresponds to the position of this axis in axisList[]. */
    def combo = it  // capture the combination; inside the nested closures below, 'it' would refer to their own implicit parameter
    def myAgent = combo[0]
    def myJdk = combo[1]
    def myFruit = combo[2]
    return {
        // This is where the important work happens for each combination
        node(myAgent) {
            println "Executing combination ${combo.join('-')}"
            def javaHome = tool myJdk
            println "Node=${env.NODE_NAME}"
            println "Java=${javaHome}"
        }
        // We don't declare a specific agent for this part
        node {
            println "fruit=${myFruit}"
        }
    }
}
/*
 * This is where the magic happens:
 * recursively work through the axisList and build all combinations.
 */
comboBuilder = { def axes, int level ->
    for (entry in axes[0]) {
        comboEntry[level] = entry
        if (axes.size() > 1) {
            comboBuilder(axes[1..-1], level + 1)
        }
        else {
            tasks[comboEntry.join("-")] = task(comboEntry.collect())
        }
    }
}
stage ("Setup") {
    node {
        println "Initial Setup"
    }
}
stage ("Setup Combinations") {
    node {
        comboBuilder(axisList, 0)
    }
}
stage ("Multiconfiguration Parallel Tasks") {
    // Run the tasks in parallel
    parallel tasks
}
stage ("The End") {
    node {
        echo "That's all folks"
    }
}
You can see a more detailed flow of the job at http://localhost:8080/job/multi-configPipeline/[build]/flowGraphTable/ (available under the Pipeline Steps link on the build page).
EDIT:
You can move the stage down into the "task" creation and then see the details of each stage more clearly, but not in a neat matrix like the multi-config job.
...
    return {
        // This is where the important work happens for each combination
        stage ("${combo.join('-')}--build") {
            node(myAgent) {
                println "Executing combination ${combo.join('-')}"
                def javaHome = tool myJdk
                println "Node=${env.NODE_NAME}"
                println "Java=${javaHome}"
            }
            // Node irrelevant for this part
            node {
                println "fruit=${myFruit}"
            }
        }
    }
...
Or you could wrap each node in its own stage for even more detail.
As I did this, I noticed a bug in my previous code (fixed above now): I was passing the comboEntry reference to the task, when I should have sent a copy. While the names of the stages were correct, when the tasks actually executed, the values were, of course, all the last entry encountered. So I changed it to tasks[comboEntry.join("-")] = task(comboEntry.collect()).
I noticed that you can leave the original stage ("Multiconfiguration Parallel Tasks") {} around the execution of the parallel tasks. Technically you then have nested stages. I'm not sure how Jenkins is supposed to handle that, but it doesn't complain. However, the 'parent' stage timing is not inclusive of the parallel stages' timing.
I also noticed that when a new build starts to run, on the "Stage View" of the job all the previous builds disappear, presumably because the stage names don't all match up. But after the build finishes running, they all match again and the old builds show up again.
And finally, Blue Ocean doesn't seem to visualize this the same way. It doesn't recognize the "stages" in the parallel processes, only the enclosing stage (if present), or "Parallel" if it isn't. It then only shows the individual parallel processes, not the stages within.
Points 1 and 3 are not completely clear to me, but I suspect you just want to use “scripted” rather than “Declarative” Pipeline syntax, in which case you can make your job do whatever you like—anything permitted by matrix project axes and axis filters and much more, including parallel execution. Declarative syntax trades off syntactic simplicity (and friendliness to “round-trip” editing tools and “linters”) for flexibility.
Point 2 is about visualization of the result, rather than execution per se. While this is a complex topic, the usual concrete request which is not already supported by existing visualizations like Blue Ocean is to be able to see test results distinguished by axis combination. This is tracked by JENKINS-27395 and some related issues, with design in progress.

iOS Testing: Is there a way to skip tests?

I don't want to execute certain tests if the feature is currently disabled. Is there a way to "skip" a test (and to get appropriate feedback on the console)?
Something like this:
func testSomething() {
    if !isEnabled(feature: Feature) {
        skip("Test skipped, feature \(feature.name) is currently disabled.")
    }
    // actual test code with assertions here, but not run if skip above called.
}
You can disable XCTest runs in Xcode by right-clicking on the test symbol in the editor gutter on the left.
You'll get a context menu where you can select the "Disable …" option (the menu item names the specific test).
Right-clicking again will allow you to re-enable it. Also, as stated in sethf's answer, you'll see entries for currently disabled tests in your .xcscheme file.
As a final note, I'd recommend against disabling a test and committing that change to your xcscheme. Tests are meant to fail, not be silenced because they're inconvenient.
Another possible solution, which I found in some article: prefix your skipped tests with something like "skipped_".
Benefits:
Xcode will not treat them as tests
You can easily find them using search
You can make them tests again by replacing "skipped_" with ""
Beginning with Xcode 11.4 you'll be able to use XCTSkipUnless(_:_:file:line:).
The release notes read:
XCTest now supports dynamically skipping tests based on runtime conditions, such as only executing some tests when running on certain device types or when a remote server is accessible. When a test is skipped, Xcode displays it differently in the Test Navigator and Test Report, and highlights the line of code where the skip occurred along with an optional user description. Information about skipped tests is also included in the .xcresult for programmatic access.
To skip a test, call one of the new XCTSkip* functions from within a test method or setUp(). For example:
func test_canAuthenticate() throws {
    try XCTSkipIf(AuthManager.canAccessServer == false, "Can't access server")
    // Perform test…
}
The XCTSkipUnless(_:_:file:line:) API is similar to XCTSkipIf(_:_:file:line:) but skips if the provided expression is false instead of true, and the XCTSkip API can be used to skip unconditionally. (13696693)
I've found a way to do this by modifying my UI test's .xcscheme file: add a section called SkippedTests under TestableReference, then add individual Test tags with an Identifier attribute containing the name of your class and test method. Something like:
<SkippedTests>
    <Test Identifier="ClassName/testMethodName" />
</SkippedTests>
Hope this helps
From Xcode 11.4+, you can use XCTSkipIf() or XCTSkipUnless().
try XCTSkipIf(skip condition, "message")
try XCTSkipUnless(non-skip condition, "message")
https://developer.apple.com/documentation/xctest/methods_for_skipping_tests#overview
This is what test schemes are meant to do.
You can have different schemes targeting different testing situations or needs.
For example, you may want to create a scheme that runs all your tests (full regression scheme), or you may want to select a handful of them to do a quick smoke test on your app when small changes are made.
This way, you can select different schemes according to how much testing you need to do.
Just go to
Product >> Scheme
It's not that universal, but you can override invokeTest in XCTestCase and avoid calling super where necessary. I'm not sure about providing the appropriate feedback in the console, though.
For instance, the following fragment makes the test run only on the iOS Simulator with iPhone 7 Plus/iPad Pro 9.7"/iOS 11.4:
class XXXTests: XCTestCase {
    let supportedModelsAndRuntimeVersions: [(String, String)] = [
        ("iPhone9,2", "11.4"),
        ("iPad6,4", "11.4")
    ]

    override func invokeTest() {
        let environment = ProcessInfo().environment
        guard let model = environment["SIMULATOR_MODEL_IDENTIFIER"], let version = environment["SIMULATOR_RUNTIME_VERSION"] else {
            return
        }
        guard supportedModelsAndRuntimeVersions.contains(where: { $0 == (model, version) }) else {
            return
        }
        super.invokeTest()
    }
}
If you use Xcode 11 and a TestPlan, you can tweak your configuration to skip or allow specific tests. An Xcode TestPlan is a JSON file after all.
By default, all tests are enabled; you can skip a list of tests or a whole test file.
"testTargets" : [
{
"skippedTests" : [
"SkippedFileTests", // skip the whole file
"FileTests\/testSkipped()" // skip one test in a file
]
...
On the opposite, you can also skip all tests by default and enable only few.
"testTargets" : [
{
"selectedTests" : [
"AllowedFileTests", // enable the whole file
"FileTests\/testAllowed()" // enable only a test in a file
]
...
I'm not sure if you can combine both configurations, though. The logic flips based on the "Automatically includes new tests" option.
Unfortunately, there is no built-in test-case skipping. A test case either passes or fails.
That means you will have to add that functionality yourself: you can add a function to XCTestCase (e.g. XCTestCase.skip) via a category that prints the information to the console. However, you will have to put a return after it to prevent the remaining asserts from running.
While other answers cover similar logic, if you don't want to maintain an extra file for conditions, you can mark your test function with throws and then use XCTSkip with a clear description explaining why the test is skipped. Note that a clear message is important: it lets you read the reason in the Report Navigator and understand why a test was skipped without having to open the related XCTestCase.
Example:
func test_whenInilizedWithAllPropertiesGraphQLQueryVariableDict_areSetCorrectly() throws {
    // Skip intentionally so that we remember to handle this.
    throw XCTSkip("This method should be implemented to test equality of NSMutableDictionary with heterogeneous items.")
}
Official iOS documentation
https://developer.apple.com/documentation/xctest/methods_for_skipping_tests
Use XCTSkipIf() or XCTSkipUnless() when you have a Boolean condition that you can use to evaluate when to skip tests.
Throw an XCTSkip error when you have other circumstances that result in skipped tests. For example:
func testSomethingNew() throws {
    guard #available(macOS <#VersionNumber#>, *) else {
        throw XCTSkip("Required API is not available for this test.")
    }
    // perform test using <#VersionNumber#> APIs...
}
There is no test-case skipping. You can use an if-else block (nested if needed) and run/print your desired output.
