SonarQube stops and starts using quality profiles (by itself?) - TFS

We are using SonarQube to analyse the code that we check in to TFS. Every time a developer performs a check-in, the new/changed code is analysed. This mechanism worked well for us until last Friday, when SonarQube suddenly told us we had thousands of code smells/bugs/vulnerabilities in 1 of our 34 projects. Trying to understand where this came from, I saw a quirk in the SonarQube activities:
Performing an analysis, SonarQube says 'Quality Profile: Stop using "[x] way"' for 3 quality profiles.
Performing the next analysis, SonarQube says 'Quality Profile: Use "[x] way"'.
This change in the profiles being used is also visible in the activity graph.
We made no changes to the quality profiles between these analyses. How SonarQube is triggered also did not change in this period, and no changes/updates were made to any of the systems involved between these analyses.
Right now I'm completely in the dark about what could have caused the change in (not) using some quality profiles. My question therefore is:
Has anybody encountered this before, or can anybody shed some light on where to look to find out where this came from?

Issues get reopened and comment history disappears on subsequent analysis

We're using SonarQube 5.6.6 (LTS) with the SonarC# plugin version 6.5.0.3766 and TFS 2017 Update 2 (SonarQube plugin v3.0.2).
We're repeatedly seeing that issues that were previously marked as resolved (Won't Fix) get reopened. My question is: why does SonarQube behave this way?
This issue is also mentioned in a number of different posts (1, 2, 3) on Stack Overflow, but with no root cause or resolution. Link 3 also states that this is an issue in SonarQube 6.2.
I'm curious whether this is due to a misconfiguration on our part or built-in behaviour of SonarQube.
Our SonarQube server is running on a Windows 2012 R2 server with an MS SQL backend, if that's relevant.
Furthermore, we're using TFVC for versioning and get blame information through the SCM plugin. If an issue has been marked as resolved (Won't Fix), I've noticed that it appears to be reopened as a new issue (i.e. with no history available).
Example: a colleague marked an issue as resolved (Won't Fix) in a file which was last modified in TFVC back in November 2015. However, this morning the issue was marked as open and assigned to the developer who originally checked in the code. There is no history in SonarQube of the issue ever having been in a resolved state. It appears as if it's a new issue in a new file instead of a known issue which has already been resolved.
To avoid weird issues related to compiling our C# solution, we always clean our workspace completely prior to our build. I don't know whether that matters. Our builds are also executed on different build machines, so I don't know whether that makes SonarQube think we're indeed always analyzing new files.
Could the use of different build machines and/or build definitions for the same SonarQube project result in this behavior?
I've also seen from the logs and reports that SonarQube appears to analyze the ENTIRE solution and not only the changed files. This makes our analysis very time-consuming and not at all suitable for a fast feedback loop. I suspect the long analysis time and the reopening of issues are related.
Analysis of a project with approx. 280 KLOC takes approx. 8-10 min. as a background task on the server. That's on subsequent analyses (i.e. not the first run).
Is this related to the above-mentioned problem of issues getting reopened by the server?
Strangely enough, leak periods appear to work correctly, i.e. we correctly identify issues within the right leak period. I've not verified this in detail yet, but it's definitely not ALL old issues that get reported as open within the leak period. We only see old issues reappear if the file that contains them has been modified - this reactivates all the other issues (from a previous version or leak period) within that file.
I've not specified any additional command-line parameters for the SonarQube scanner during builds, apart from the TFVC SCM plugin and the path for coverage data.
We're using the VSTest task v2, as otherwise it's not possible to collect code coverage in SonarQube when using TFS 2017 Update 2 and VS 2017.
Please advise on any further data that I should supply to help investigate this.
Thank you for your help!

Fortify Rescan issues

Fortify real-world scenario issue:
The real issue I'm consistently having is not with the actual remediation of Fortify issues, but rather with reliably suppressing any findings that are determined to be false positives. I can suppress them in the report - I'm confident about that - but that still doesn't prevent the same issues from being identified in a subsequent scan of the code. And that, in turn, takes significant time on my part to suppress them EVERY time we run a scan.
I may be deploying changes to the same code files several times throughout the year, so every time I need to spend significant time removing false positives from the code.
My flow:
scan --> identify false positives --> suppress in report --> deploy --> make changes again --> scan --> identify false positives --> suppress in report --> deploy. This process repeats.
Is there any way to overcome this kind of repeated problem? It would help me a lot.
The problem I think you're experiencing requires merging the FPR (Fortify Project Results) files. If you perform an audit in one FPR and then do another scan, there needs to be a merge to bring the previous analysis forward. Some of the Fortify products do this automatically: Software Security Center, the Visual Studio plugin, and the Eclipse plugin automatically merge the new FPR with the old FPR. You can also manually merge FPR files using Audit Workbench (it's under Tools > Merge Audit Projects), or you can use the command line via FPRUtility. The command would be:
FPRUtility -merge -project <primary.fpr> -source <secondary.fpr> -f <output.fpr>
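For example, assuming the new scan is the primary project and the previously audited FPR is the source (the file names here are purely illustrative), the merge might look like:
FPRUtility -merge -project scan_new.fpr -source scan_audited.fpr -f scan_merged.fpr
The audit information from scan_audited.fpr, including your false-positive suppressions, is carried forward into scan_merged.fpr, so findings you have already triaged stay suppressed after the next scan.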

How to have SonarQube block code on failure of CI build?

We are standing up a CI pipeline using Jenkins, and we are using SonarQube to run static analysis. We have set up quality gates, and we now fail builds when the gates are not met. When we fail a build, the code is still put into SonarQube. So if a developer tries to promote twice, the second build will 'pass'.
Example:
The gate is: no new critical issues.
The developer checks in code with 1 new critical issue.
The build fails on static analysis (SonarQube has the rule flagged as a blocker).
The developer checks in code again (no code changes).
The static analysis passes because the critical issue is no longer 'new'.
Is there a way to revert back to the previous version on a failure, or better yet, to run the analysis against the most recent non-failing run?
Notes: version - SonarQube 5.1.2
You've asked how to keep committed code from being reflected in the SonarQube platform.
I don't recommend trying to suppress the analysis of committed code because then your quality measures don't reflect the state of your code base. Imagine that someone is making a decision about whether HEAD is releasable based on what they see in the SonarQube UI. If you're keeping "bad" code from showing up, then... what's the point of analyzing at all? Just use a photo editor to construct the "perfect" dashboard, and serve that gif whenever someone hits http://sonarqube.myco.com:9000.
Okay, that may be an extreme reaction, but you get my point. The point of SonarQube analysis is to show you what's right and wrong with your code. Having a commit show up in the SonarQube UI should not be an honor you "earn" by committing "worthy" code. It should be the baseline expectation. Like following the team's coding conventions and using an SCM.
But this rant doesn't address your problem, which is the fact that your current quality gate is based on last_analysis. On that time scale, 'new critical issues' is an ephemeral measure, and you're losing the ability to detect new critical issues because they're 'new' one minute and 'old' the next. For that reason, I advise you to change your time scale. Instead of looking at 'new' versus the last analysis, compare to the last version, the last month (over 30 days), or the last year (over 365 days). Then you'll always be able to tell what's 'new' versus the accepted baseline.
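In the 5.x series these baselines are the differential periods, configurable under Administration > General Settings > Differential Views or as settings; a minimal sketch, where the values chosen are only examples:
sonar.timemachine.period1=previous_version
sonar.timemachine.period2=30
sonar.timemachine.period3=2015-01-01
Accepted values include previous_analysis, previous_version, a number of days, a date (yyyy-MM-dd), or a version string, so a quality gate built on period1 would then measure 'new' issues against the last version rather than the last run.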

Code review in TFS

I am new to configuring TFS.
Currently our project is 50% done, but we have found that we have very bad code. We are considering static code analysis tools like ReSharper or other products like StyleCop, Code Analysis, and FxCop.
We want to configure TFS to reject a check-in when that check-in contains code that triggers code analysis warnings.
But for the pre-existing code, we want to suppress the existing warnings to prevent the code from becoming worse than it already is.
As Ivan mentions, your root cause is not a lack of analysis tools, but probably the level of quality and rigor agreed upon (or currently being enforced) between the development team and their project's sponsor. It may be that the pressure on the team is too high, causing important review actions to be skipped, or that the team (or the sponsor!) doesn't share your commitment to quality. Or the team may not have the right level of knowledge to prevent these issues from happening.
The best way out of this is to fix as much as you can in a short period of time.
Warning: I've experienced with a number of teams the effect of turning on too many rules all at once. Generally, there is a reluctance among people to concede that their work hasn't been up to par, and turning on rules that do not directly cause bugs ("The identifier is cased incorrectly", for example) can cause frustration that severely hampers your momentum. Carefully selecting which rules need to be solved now and which can wait for later has worked much better in my experience. Once the team has developed a way to solve these kinds of problems, you can easily apply more.
Turning on tools like Code Analysis for your solution or the Solution-Wide Analysis feature of ReSharper can help you spot issues, but it won't solve them or prevent similar issues from popping up in the future unless your team stops creating them.
Tip: Note that you can also run ReSharper during your build using the ReSharper command-line tools.
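A sketch of what that could look like (the solution and output file names are illustrative, and the exact flag syntax varies by version of the command-line tools):
inspectcode.exe MySolution.sln /output=inspectcode-results.xml
The XML report can then be parsed or published as part of the build.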
StyleCop I would not enforce on this team (just yet) if the code itself is bad enough to trigger massive warnings that may indicate bugs and issues. Fix those problems first; make the code pretty later. Your priority right now is to remove any possible bugs.
Code Analysis and FxCop are the same thing, so you won't need to turn on both. A tool like ReSharper can help your developers quickly remove a lot of the issues by using the magic key combination ALT+ENTER.
If you want to create a clean baseline, you can run Code Analysis once, then select all warnings that are generated and choose Suppress in global suppression file. This will work for Code Analysis issues, but it won't suppress any compiler warnings; there is no easy way to quickly suppress all current compiler warnings.
Tip: It sometimes helps to temporarily rename any existing GlobalSuppressions.cs files, so that this "baseline" is stored separately. You then know which warnings you'll have to fix at a later point in time.
Tip: When a developer suppresses a warning, have them add a Justification="Reason for suppression" to the suppression that is generated. That way you can distinguish between carefully considered suppressions and temporary ones. A sketch of what such an entry looks like follows below.
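These suppressions end up as assembly-level attributes in GlobalSuppressions.cs; a minimal sketch, with an illustrative rule ID and a made-up target:

using System.Diagnostics.CodeAnalysis;

// Baseline suppression from the initial clean-up run; the Target below is an example.
[assembly: SuppressMessage("Microsoft.Naming",
    "CA1709:IdentifiersShouldBeCasedCorrectly",
    Scope = "member",
    Target = "MyCompany.Billing.invoiceProcessor.#RunBatch()",
    Justification = "Baseline suppression; legacy naming scheduled for cleanup")]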
Depending on whether you already have a build server, your next step is to install Team Build; once you have a build server, you'll need to set up a build definition. This blog post covers most of the steps.
In the build definition, set the trigger to "Gated Check-in" and on the Process tab make sure you set Code Analysis to "Always". If you want to fail your build based on Code Analysis errors, you need to create a custom rule set and configure it for your solution; a sketch of such a rule set follows below.
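A rule set is an XML file referenced from the project's CodeAnalysisRuleSet setting; this sketch (the rule IDs were picked only as examples) escalates a few rules to errors so the gated build fails on them while leaving a cosmetic rule as a warning:

<?xml version="1.0" encoding="utf-8"?>
<RuleSet Name="Gated check-in rules" ToolsVersion="12.0">
  <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis"
         RuleNamespace="Microsoft.Rules.Managed">
    <!-- Escalated to Error so the gated build fails on them -->
    <Rule Id="CA2000" Action="Error" />
    <Rule Id="CA2100" Action="Error" />
    <!-- Cosmetic naming rule stays a warning for now -->
    <Rule Id="CA1709" Action="Warning" />
  </Rules>
</RuleSet>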
To have compiler warnings fail the build as well, you can enable the "Treat warnings as errors" option in the project's build settings.
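In the project file this corresponds to an MSBuild property; a sketch (place it in the PropertyGroup of the configuration you build under the gate):

<PropertyGroup>
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>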
Once you have enabled your gated check-in build, all developers will be prompted to wait for their build to finish when they check in. You can turn on alerts (using Web Access) or use the Build Notification tool to get notified when the changes have been successfully submitted.
Tip: Instead of turning on all rules at once (or switching them all to cause an ERROR during builds), you can also opt to turn on rules a couple at a time and fix them. Turning on rules by category gives you a nice opportunity to teach people the importance of the rules being turned on and possible ways of fixing them.
A far more advanced solution would be to install and configure SonarQube alongside your Team Build environment. The ALM Rangers and Sonar have recently worked together to create installation guidance and a number of extensions to enable Team Build and SonarQube integration. You can find the installation guide here.

Sonar execution via Jenkins: Sonar processing continues even after ANALYSIS SUCCESSFUL message

I am facing an issue while running Sonar through Jenkins. After configuration, when I trigger a build, it runs and creates the EAR successfully; the Sonar analysis then starts, also runs successfully, and shows ANALYSIS SUCCESSFUL near the end of the Jenkins build output. But even after the successful analysis, the build keeps processing and never ends, no matter how long I wait. The very last line of the build output is
"12:55:14.159 INFO - <- Delete aborted builds"
Please see the attached screenshot for reference.
Can anybody help me out with this issue?
What is the reason behind this continuous processing of the Sonar analysis?
It never completes. What should I do on my end to complete the build process so that my build succeeds?
Try restarting Jenkins. It solved the issue for me.
The ANALYSIS SUCCESSFUL message just means that all sensors and decorators have completed. There are still some post-analysis tasks which must take place (specifically, any class extending PostJob). I've found that the final log message is not always an accurate indicator of what's wrong: there are some plugins which churn forever without producing any output. But I wouldn't be surprised if your analyses really are stuck at "Delete aborted builds". Sometimes the database cleaner can take a long time, but if it's more than 10 minutes, it's stuck. It's very likely a problem with database interaction.
The way to continue is to enable all possible SQL tracing. Enable all the options: sonar.showProfiling=true, sonar.showSql=true, sonar.showSqlResults=true, and sonar.verbose=true. See the Sonar analysis parameters documentation for more information.
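From Jenkins these can be passed straight to the scanner as analysis properties; a sketch (the scanner executable and project setup are whatever you already use):
sonar-runner -Dsonar.verbose=true -Dsonar.showProfiling=true -Dsonar.showSql=true -Dsonar.showSqlResults=true
Be warned that sonar.showSqlResults in particular makes the log extremely verbose, so turn it off again once you've found the stuck query.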
If that doesn't tell you what's wrong, you might have some luck getting more information out of sonar.log by editing wrapper.conf to show DEBUG log output.
