With Codecov, you can set the target coverage for your project. Assuming a project currently has 90% coverage, the configuration
coverage:
  status:
    project:
      default:
        target: auto
        threshold: 1%
will allow the next PR to reduce coverage only to a minimum of 89%, but with subsequent PRs the coverage could keep dropping. Whereas,
coverage:
  status:
    project:
      default:
        target: 80%
will only allow PRs that keep the project coverage above 80%, but a single PR could drop the coverage from 90% to 80%.
Is it possible to have a configuration that combines these two, where a new PR can reduce the project coverage by at most 1%, but the coverage must never go below 80%?
You should be able to set both the target value and the threshold value.
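For example, a codecov.yml along these lines should express both constraints (it's worth double-checking against the Codecov docs exactly how the two interact when set together):

coverage:
  status:
    project:
      default:
        target: 80%     # intended: never let project coverage fall below 80%
        threshold: 1%   # intended: allow at most a 1% drop per PR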
You can also leverage flags to track coverage more granularly.
You can also set "patch" coverage targets, which might be more important since they only consider the newly added or modified production code. That way you can start with a low project coverage target but hold your patch/diff coverage to stricter standards, as sketched below.
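A sketch of what that could look like (the numbers are arbitrary examples):

coverage:
  status:
    project:
      default:
        target: 80%     # relaxed target for overall project coverage
    patch:
      default:
        target: 90%     # stricter target for the lines changed in the PR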
We are using the pitest plugin in SonarQube and have set up a quality gate on mutation coverage, since it is much more valuable than other coverage measurements.
However, when a project that does not use pitest is pushed, the quality gate is not triggered at all.
Is it possible to define a quality gate to detect that pitest is not defined for a project at all?
Thank you
Simple answer to your question: no, it's not. Quality gate conditions cannot be triggered by the absence of a metric, so a metric has to be computed.
But the most recent version of the sonar-pitest-plugin (0.9) only computes coverage information when mutation analysis data is present (i.e. a pitest report exists).
The change, however, is not overly complicated. If you need an urgent fix, check the PitestComputer class at line 84 and add the following else block:
if (mutationsTotal != null) {
    ...
} else {
    // no pitest report exists for this project: record 0% so the quality gate condition can fire
    context.addMeasure(PitestMetrics.MUTATIONS_KILLED_PERCENT_KEY, 0.0);
}
Compile it and install it manually to your instance.
That said, I want to give you a short heads-up that there is a newer plugin addressing mutation analysis in SonarQube (full disclosure: I'm the author), with several new features, rules, etc.
The plugin is available via the market place (named "Mutation Analysis").
The plugin has the same limitation as the sonar-pitest-plugin, but I just created a new issue addressing your problem:
https://github.com/devcon5io/mutation-analysis-plugin/issues/13
Edit:
This feature is implemented in version 1.3.
Scenario
I am running Jest (on a React project) to test my files.
Goal
I would like to see the coverage report for all my source files and then, by running jest in --watch mode, see only the changed files' coverage change (hopefully improve) while leaving the untested files' report untouched.
Problem
Right now, while running jest --config=config/jest/config.json --watch --coverage with the configuration "collectCoverageFrom": ["src/**/*.{tsx,ts}"], the coverage for the modified file is updated properly, BUT the coverage for the UNTESTED files goes back to 0% (since no tests were run for those files and any previous test run is forgotten).
Question
Is there a way to retain the old test coverage reports for the untested files?
Why?
Imagine we have 100 files in the src folder. At the beginning all the coverage is 0%. Then you start jest in watch mode and fix 1 file. The coverage shows 100% for that file and 0% for the others, for a total coverage of 1%.
Then you fix a second file (the day after). Your total coverage should be 2%, but since TODAY you changed only 1 file, the overall coverage is reported as 1%.
The correct total coverage is reported only when you run a full test+report without the watch mode, but that makes the process impractical, since I would like to fix my tests progressively, watch for file changes, and see the coverage grow as I fix.
Similar reported issues
https://github.com/facebook/jest/issues/2256 was closed, but I still don't understand how it would fix the problem
Ideas
I don't mind if, to get this report, I need to adapt jest/istanbul to do some magic merging of reports. I tried to follow some comments on GitHub but couldn't come up with something that works.
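To illustrate the kind of merge I have in mind, here is a rough sketch only: it assumes Jest's default json reporter writing coverage/coverage-final.json, the istanbul-lib-coverage package, and a made-up baseline path that would be kept from a previous full run.

// merge-coverage.js (sketch): merge the latest watch-mode output into a baseline
// snapshot so files not touched in this run keep their previously recorded figures.
const fs = require('fs');
const libCoverage = require('istanbul-lib-coverage');

const BASELINE = 'coverage/baseline/coverage-final.json'; // snapshot from a full run (hypothetical path)
const LATEST = 'coverage/coverage-final.json';            // written by the latest jest run (default json reporter)

// start from the baseline, then merge the new data; hit counts are combined per file
const map = libCoverage.createCoverageMap(JSON.parse(fs.readFileSync(BASELINE, 'utf8')));
map.merge(JSON.parse(fs.readFileSync(LATEST, 'utf8')));

// write the merged result back as the new baseline
fs.writeFileSync(BASELINE, JSON.stringify(map.toJSON()));
console.log('merged coverage for', map.files().length, 'files');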
We are standing up a CI pipeline using Jenkins and we are using SonarQube for static analysis. We have set up quality gates and are now failing builds when the gates are not met. When we fail a build, the code is still put into SonarQube, so if a developer tries to promote twice, the second build will 'pass'.
Example:
Gate is no new critical issues.
The Developer checks in code with 1 new critical issue.
The build fails on static analysis (SonarQube has the rule flagged as a blocker).
The Developer checks in code again (no code changes).
The static analysis passes because the critical issue is not 'new'.
Is there a way to revert to the previous version on a failure, or better yet, to run the analysis against the most recent non-failing run?
Notes: Version: SonarQube 5.1.2
You've asked how to keep committed code from being reflected in the SonarQube platform.
I don't recommend trying to suppress the analysis of committed code because then your quality measures don't reflect the state of your code base. Imagine that someone is making a decision about whether HEAD is releasable based on what they see in the SonarQube UI. If you're keeping "bad" code from showing up, then... what's the point of analyzing at all? Just use a photo editor to construct the "perfect" dashboard, and serve that gif whenever someone hits http://sonarqube.myco.com:9000.
Okay, that may be an extreme reaction, but you get my point. The point of SonarQube analysis is to show you what's right and wrong with your code. Having a commit show up in the SonarQube UI should not be an honor you "earn" by committing "worthy" code. It should be the baseline expectation. Like following the team's coding conventions and using an SCM.
But this rant doesn't address your problem, which is that your current quality gate is based on last_analysis. On that time scale, "new critical issues" is an ephemeral measure, and you're losing the ability to detect new critical issues because they're "new" one minute and "old" the next. For that reason, I advise you to change your time scale. Instead of looking at "new" versus last analysis, compare to the last version, the last month (over 30 days), or the last year (over 365 days). Then you'll always be able to tell what's "new" versus the accepted baseline.
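On a 5.x instance the differential periods are configured under Administration > General Settings; as a sketch (the property name and accepted values should be verified against your version's documentation), the equivalent setting would be:

# compare "new" issues against the last version instead of the last analysis
sonar.timemachine.period1=previous_version
# or against a fixed window, e.g. the last 30 days:
# sonar.timemachine.period1=30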
I am currently using Karma's coverage within a project. I would like to enforce a coverage threshold and thereby make my builds on Circle CI fail and go red when coverage falls below a set percentage.
You can install karma-threshold-reporter and configure it as shown in the README.
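For reference, a sketch of what that might look like in karma.conf.js (the option names reflect my reading of the plugin's README and should be verified there; it assumes karma-coverage is already producing coverage data, and the threshold numbers are just examples):

// karma.conf.js (sketch)
module.exports = function (config) {
  config.set({
    // ... frameworks, files, coverage preprocessors, etc.
    reporters: ['progress', 'coverage', 'threshold'],
    // per-metric minimums; Karma exits non-zero (failing the CI build) when any is missed
    thresholdReporter: {
      statements: 80,
      branches: 70,
      functions: 80,
      lines: 80
    }
  });
};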
Does anybody know about the following scenario?
I need to
calculate Apex code coverage during deployment with the Migration Tool
output the result to the console
after that, decide whether to continue the deployment. If the code coverage is less than 88% (for instance), I need to mark the build as failed in Jenkins and stop the deployment; otherwise I can go ahead.
So, what I need is just the ability to set any value (greater than 75%) as the required test coverage for my project.