Can nyc generate a combined coverage report that shows which individual reports cover which part of the file?

I'm converting Mocha tests to Jest and want to maintain coverage.
I have figured out how to combine coverage, based on Merging coverage reports from nyc and some other similar questions.
However, when the two .json files are combined, the numbers are just added up.
Here's a mock-up of what I'm imagining:
I don't think this is possible.
I've looked at the generated reports and the code used to combine coverage, and hunted around the docs and issues for nyc.
But on the off chance I missed something, I thought I'd ask here.
Is this, or something very similar, possible?
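For reference, here is a minimal sketch of the merge step I'm describing, using istanbul-lib-coverage (which, as far as I can tell, is what nyc's merging is built on); the file paths below are made up. It shows why the numbers are simply added up: merging sums the hit counts per statement and branch, and the per-suite origin is discarded at that point.

    // Rough sketch: merge two Istanbul-format coverage files (paths are hypothetical).
    import { createCoverageMap } from 'istanbul-lib-coverage';
    import * as fs from 'fs';

    const mochaCov = JSON.parse(fs.readFileSync('coverage-mocha/coverage-final.json', 'utf8'));
    const jestCov = JSON.parse(fs.readFileSync('coverage-jest/coverage-final.json', 'utf8'));

    const merged = createCoverageMap(mochaCov);
    merged.merge(jestCov); // hit counts are summed here; which suite produced them is lost

    fs.writeFileSync('coverage/merged-final.json', JSON.stringify(merged.toJSON()));

A report generated from the merged file can therefore only show combined counts, not which suite covered which lines.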

Related

Understanding EclEmma results

I am trying to understand the concept of code coverage and am a complete novice on this topic.
I am using EclEmma to measure the code coverage of an open-source project. Can somebody help me understand what important insights I should take from the snapshot below?
Code coverage is a metric that expresses which portion of the (application) code gets executed when you run your test cases. However, it is just a measure of completeness; it provides no information about how thoroughly the executed code was tested by the test cases.
In your screenshot, the third line of the table (src/main/java) is the relevant one. It expresses that the application code consists of 3,846 (bytecode) instructions; out of these, roughly 67% were executed (presumably by the automated test cases residing in src/test/java). This means that the test cases cannot reveal any fault in one third of the whole application, because the test cases do not touch that code at all. The remaining code (the other two thirds) is executed by at least one test case. Test cases can reveal faults in this code; how effectively they do so depends on the input data and oracles they use.
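As a rough sanity check on those numbers (the exact covered/missed counts are in the screenshot's columns, so this is only an approximation):

    covered ≈ 0.67 × 3,846 ≈ 2,577 instructions executed
    missed  ≈ 3,846 − 2,577 ≈ 1,269 instructions never touched by any test case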
Note that it is often not possible or sensible to achieve 100% coverage.

In Travis-CI, how can I have a different matrix for pull requests?

Is there a proper way to create a build matrix specific for Pull Requests?
The idea is:
In normal builds, I want to test only a few things (code style/standards, some unit tests, some general validation), mostly just one item in the build matrix.
In pull requests, I want to run the tests in several different environments, including different databases and versions. This is what I currently have, but it demands a lot from Travis (and it is slow).
I know I can achieve that in the script by checking TRAVIS_PULL_REQUEST and skipping the tests, but that would misleadingly show some environments as "passed" when they were actually not tested.
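To make that trade-off concrete, here is a rough sketch of that workaround for a Jest/Mocha-style suite; the suite name and contents are made up, and TRAVIS_PULL_REQUEST is the variable Travis sets to the PR number (or to "false" for non-PR builds):

    // Rough sketch of the TRAVIS_PULL_REQUEST workaround; suite name and contents are hypothetical.
    const pr = process.env.TRAVIS_PULL_REQUEST;
    const isPullRequest = pr !== undefined && pr !== 'false';

    // Skipped suites still leave the job green, which is exactly the misleading "passed" problem.
    (isPullRequest ? describe : describe.skip)('full database/version matrix', () => {
      it('runs the expensive integration tests against each database', () => {
        // ...
      });
    });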
Thank you for any help / guidance,
Daniel
Interesting wish!
This is not possible at the moment. You might want to chime in on https://github.com/travis-ci/beta-features/issues/11 to bring it to the attention of the relevant people.

JBehave vs FitNesse

If your system is basically crunching numbers, i.e. given a set of large inputs it runs a process on them and then asserts on the outputs, which framework is better for this?
By 'large inputs', I mean we need to enter data for several different, related entities.
Also, there are several outputs, i.e. we don't just get one number at the end.
If you find yourself talking through different examples with people, JBehave is probably pretty good.
If you find yourself making lists of numbers and comparing inputs with outputs, FitNesse is probably better.
However, if you find yourself talking to other devs and nobody else, use plain old JUnit. The less abstraction you have, the quicker it will be to run and the easier it will be to maintain.

External data source with SpecFlow

I find entering the data in a SpecFlow feature file very painful, especially when the data is repetitive and large. Can we use an external data source, like a spreadsheet, to enter this data and then use that external data source in the feature file?
It's theoretically possible, but probably so much effort that you wouldn't want to do it.
The problem is that the feature file is simply a human-readable form. When it is saved in Visual Studio, it is parsed and converted into the feature.cs file, and that is the one that is compiled and used for testing.
So your process would become
edit spreadsheet
export to feature file
get SpecFlow's VS plugin to convert it to feature.cs
run msbuild
run tests via NUnit or similar
I wouldn't do this. Instead, I'd focus on making my tests better examples. It sounds like you are trying to exhaustively cover every possibility. Don't come up with examples to cover every possible case; instead, cover as much logic as possible with fewer tests.
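If you did want to go down that road anyway, the "export to feature file" step could be a small script along these lines (a sketch only; the file names, columns, and scenario wording are all made up), writing a Scenario Outline whose Examples table comes from a CSV export of the spreadsheet:

    // Rough sketch: generate a SpecFlow feature file from a CSV export of the spreadsheet.
    // File names, column names, and the scenario text are hypothetical.
    import * as fs from 'fs';

    const [header, ...rows] = fs.readFileSync('testdata.csv', 'utf8')
      .trim()
      .split('\n')
      .map(line => line.split(',').map(cell => cell.trim()));

    const feature = [
      'Feature: Order totals',
      '',
      'Scenario Outline: Calculate the order total',
      `  Given an order with <${header[0]}> and <${header[1]}>`,
      '  When the order is processed',
      `  Then the total should be <${header[2]}>`,
      '',
      '  Examples:',
      `    | ${header.join(' | ')} |`,
      ...rows.map(r => `    | ${r.join(' | ')} |`),
    ].join('\n');

    // The SpecFlow VS plugin (or build-time generation) still has to turn this into feature.cs.
    fs.writeFileSync('OrderTotals.feature', feature);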

Overall code coverage for specific developers in TFS

I was just wondering if there is a way to analyse TFS to figure out code coverage results for a specific developer.
Say I want to determine the code coverage statistics for one of the developers in my team.
It’s not supported out of the box.
I don't think it will be in the future, since it's a very hard measurement:
Developers write source code. Source code within a project is usually divided by functionality, not by developer. A code coverage result is the coverage of a given assembly being exercised in a test run. We would therefore need to analyse coverage per code line and relate each code line back to a given developer's changesets. Regardless of whether the unit test DLL is instrumented, both the code in the unit test and the code being exercised are involved in the code coverage result. So which covered code line is counted for a specific developer? The lines in the unit test, a line in a shared library, a line that was changed by 4 developers (is the coverage shared), and other issues?
But why are you asking this question? If you are trying to improve the quality of a specific individual's code, reviews and pair programming would be a more efficient approach. Even if it were possible, terrorizing individuals about their code coverage would only result in a dysfunctional measurement. The source code of a given product is shared by a team, so the team is responsible for the coverage. Require your team to take that responsibility.
