Fortify rescan issues

Fortify real-world scenario issue:
The real issue I consistently have is not the actual remediation of Fortify findings, but reliably suppressing findings that have been determined to be false positives. I can suppress them in the report - I'm confident about that - but that still doesn't prevent the same issues from being flagged in a subsequent scan of the code. And that, in turn, costs me significant time suppressing them every time we run a scan.
I may deploy changes to the same code files several times throughout the year, so each time I have to spend significant effort removing the same false positives.
My flow:
scan --> identify false positives --> suppress in report --> deploy --> make more changes --> scan --> identify false positives --> suppress in report --> deploy ... and the process repeats.
Is there any way to avoid this repeated work?

The problem I think you're experiencing requires merging the FPR (Fortify Project Results) files. If you perform your analysis in one FPR and then do another scan, there needs to be a merge to bring the previous analysis forward. Some of the Fortify products do this automatically: Software Security Center, the Visual Studio plugin, and the Eclipse plugin automatically merge the new FPR with the old one. You can also merge manually in Audit Workbench (it's under Tools > Merge Audit Projects), or from the command line with FPRUtility. The command would be:
FPRUtility -merge -project <primary.fpr> -source <secondary.fpr> -f <output.fpr>
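If the scan runs from a script, the merge step can be folded into it so that the previous audit (suppressions included) is carried forward automatically on every rescan. A minimal sketch, assuming a translate-then-scan workflow; the build ID, source folder, and FPR file names are placeholders:

REM Translate and scan the project ("MyProject" and the src folder are placeholders).
sourceanalyzer -b MyProject -clean
sourceanalyzer -b MyProject src
sourceanalyzer -b MyProject -scan -f new_scan.fpr

REM Merge the new results with the previously audited FPR so the audit (and its
REM suppressions) carries forward; check the FPRUtility documentation for which of
REM the two files should be the primary in your version.
FPRUtility -merge -project new_scan.fpr -source previous_audited.fpr -f merged.fpr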

Related

SonarQube stops and starts using quality profiles (by itself?)

We are using SonarQube to analyse the code that we check in to TFS. Every time a developer performs a check-in, the new/changed code is analysed. This mechanism served us well until last Friday, when SonarQube suddenly told us we had thousands of code smells/bugs/vulnerabilities in 1 of our 34 projects. Trying to understand where this came from, I saw a quirk in the SonarQube activities:
Performing an analysis, SonarQube says 'Quality Profile: Stop using '[x] way'' for 3 quality profiles
Performing the next analysis, SonarQube says 'Quality Profile: Use '[x] way''
This change in profile usage is also visible in the activity graph.
We made no changes to the quality profiles between these analyses. How SonarQube is triggered has not changed in this period either, and no changes/updates were made to any of the systems involved between these analyses.
Right now I'm completely in the dark about what could have caused this change in (not) using some quality profiles. My question therefore is:
Has anybody encountered this before, or can anybody shed some light on where to look to find out where this came from?

How to have SonarQube block code on failure of CI build?

We are standing up a CI pipeline using Jenkins, and we are using SonarQube to run static analysis. We have set up quality gates and are now failing builds when the gates are not met. When we fail a build, the code is still put into SonarQube, so if a developer tries to promote twice, the second build will 'pass'.
Example:
Gate is no new critical issues.
The developer checks in code with 1 new critical issue.
The build fails on static analysis (SonarQube has the rule flagged as a blocker).
The developer checks in the code again (no code changes).
The static analysis passes because the critical issue is no longer 'new'.
Is there a way to revert back to the previous version on a failure, or better yet, to run the analysis against the most recent non-failing run?
Note: version - SonarQube 5.1.2
You've asked how to keep committed code from being reflected in the SonarQube platform.
I don't recommend trying to suppress the analysis of committed code because then your quality measures don't reflect the state of your code base. Imagine that someone is making a decision about whether HEAD is releasable based on what they see in the SonarQube UI. If you're keeping "bad" code from showing up, then... what's the point of analyzing at all? Just use a photo editor to construct the "perfect" dashboard, and serve that gif whenever someone hits http://sonarqube.myco.com:9000.
Okay, that may be an extreme reaction, but you get my point. The point of SonarQube analysis is to show you what's right and wrong with your code. Having a commit show up in the SonarQube UI should not be an honor you "earn" by committing "worthy" code. It should be the baseline expectation. Like following the team's coding conventions and using an SCM.
But this rant doesn't address your problem, which is the fact that your current quality gate is based on last_analysis. On that time scale "new critical issues" is an ephemeral measure, and you're losing the ability to detect new critical issues because they're "new" one minute and "old" the next. For that reason, I advise you to change your time scale. Instead of looking at "new" versus the last analysis, compare to the last version, the last month (over 30 days), or the last year (over 365 days). Then you'll always be able to tell what's "new" versus the accepted baseline.

Code review in TFS

I am new to configuring TFS.
Our project is currently 50% done, but we have found that we have very bad code. We are considering static code analysis tools such as ReSharper, or other products like StyleCop, Code Analysis, and FxCop.
We want to configure TFS to reject a check-in when it contains code that triggers code analysis warnings.
For the existing code, however, we want to suppress the current warnings so that the code at least doesn't get worse than it already is.
As Ivan mentions, your root cause is not a lack of analysis tools, but probably the level of quality and rigor agreed between the development team and the project's sponsor (or currently enforced between team members). It may be that the pressure on the team is too high, causing important review actions to be skipped, or that the team (or the sponsor!) doesn't share the same desire for quality as you do. Or the team may not have the right level of knowledge to prevent these issues from happening.
The best way out of this is to fix as much as you can in a short period of time.
Warning: I've experienced with a number of teams the effect of turning on too many rules all at once. Generally, there is a reluctance for people to concede that their work hasn't been up to par, and turning on rules that do not directly point to bugs ("The identifier is cased incorrectly", for example) can cause frustration that severely hampers your momentum. Carefully selecting which rules need to be solved now and which can wait for later has worked much better in my experience. Once the team has developed a way to solve these kinds of problems, you can easily apply more.
Turning on tools like Code Analysis for your solution, or the Solution-Wide Analysis feature of ReSharper, can help you spot issues, but it won't solve them or prevent similar issues from popping up in the future unless your team stops creating them.
Tip: You can also run ReSharper during your build using the ReSharper command-line tools.
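For example (a sketch only - it assumes the JetBrains command-line tools are installed on the build agent, and the solution and report file names are placeholders):

REM Run ReSharper's InspectCode on the build agent and keep the XML report as a build artifact.
InspectCode.exe MySolution.sln --output=resharper-report.xml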
I would not enforce StyleCop on this team (just yet) if the code is bad enough to trigger massive numbers of warnings that may point to real bugs and issues. Fix those problems first and make the code pretty later; your priority right now is to remove any possible bugs.
Code Analysis and FxCop are the same thing, so you won't need to turn on both. A tool like ReSharper can help your developers quickly remove a lot of the issues using the magic key ALT+ENTER.
If you want to create a clean baseline, you can run Code Analysis once, select all warnings that are generated, and then choose Suppress in global suppression file. This works for Code Analysis issues, but it won't suppress any compiler warnings; there is no easy way to quickly suppress all current compiler warnings.
Tip: It sometimes helps to temporarily rename any existing GlobalSuppressions.cs files so that this "baseline" is stored separately. You then know which warnings you'll have to fix at a later point in time.
Tip: When a developer suppresses a warning, have them add a Justification="Reason for suppression" to the suppression that is generated; that way you can distinguish between carefully considered suppressions and temporary ones.
Depending on whether you already have a build server, your next step is to install Team Build; once you have a build server, you'll need to set up a build definition. This blog post covers most of the steps.
In the build definition, set the trigger to "Gated Check-in" and, on the Process tab, make sure you set Code Analysis to "Always". If you want to fail your build based on Code Analysis errors, you need to create a custom ruleset and configure it for your solution.
To have compiler warnings fail the build as well, you can enable the "Treat warnings as errors" option for your projects.
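On the build side this roughly corresponds to the following MSBuild properties (a sketch; the solution and ruleset file names are placeholders):

REM Build with Code Analysis forced on, a custom ruleset, and compiler warnings treated as errors.
msbuild MySolution.sln /p:RunCodeAnalysis=true ^
                       /p:CodeAnalysisRuleSet=MyRules.ruleset ^
                       /p:CodeAnalysisTreatWarningsAsErrors=true ^
                       /p:TreatWarningsAsErrors=true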
Once you have enabled your gated check-in build, all developers will be prompted to wait for their build to finish when they check in. You can turn on alerts (using Web Access) or use the Build Notification tool to get notified when their changes have been successfully submitted.
Tip: Instead of turning on all rules at once (or switching them all to cause an ERROR during builds) you can also opt to turn on rules a couple at a time and fix them. Turning on rules by category gives you a nice opportunity to teach people the importance of the rules being turned on and possible solutions for fixing them.
A far more advanced solution would be to install and configure SonarQube alongside your Team Build environment. The ALM Rangers and Sonar have recently worked together to create installation guidance and a number of extensions to enable Team Build and SonarQube integration. You can find the installation guide here.

How to diff Fortify SCA scans

We have Fortify SCA and we are setting up regular, automated scans of our source code. Our intention is to have an alert if there is a newly introduced security issue. Is there a way, perhaps using FPRUtility (or some other method), to accomplish this? Ultimately I'd prefer something that can easily be run from the command line, but if this can also be accomplished using the GUI, I would appreciate knowing how to do that as well.
Use Audit Workbench to run a report. Choose "developer workbook" and disable all sections except one (you can choose any section you want).
In the report section's additional properties, set the filter for the issues to [issue age]:new. This means the report will show ONLY issues in your FPR that were not present in the previous scan, and were introduced in the latest scan. Save the template.
In your scan configuration, make sure to scan to the same FPR every time per project, so that "new" issues can be calculated by the report runner.
After the scan is complete, use the answer by #user1836982 to run the report. Choose the XML template and process it programmatically.
(1) Command to generate the Fortify report in XML format:
FORTIFY_INSTALL_DIR\bin\ReportGenerator.bat -format xml -f target_file_name.xml -source your_fpr_file_name.fpr -template Detailed-DefaultReportDefinition.xml
(2) You can also use AWB to generate the .pdf/.rtf/.xml report via Report (top menu bar) -> Save Report -> select format -> Save.
(3) A procedure to create an Excel sheet is described here: Export HP Fortify SCA 4.10 results in EXCEL format
(4) If you have access to the database (Oracle), you can query it with a script.
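Putting the pieces together, a command-line sketch of the "alert on newly introduced issues" check (the build ID, file names, and saved template name are placeholders, and the element you search for depends on how your template lays out the XML):

REM Scan into the same FPR every time so "new" issues can be computed, then report only new issues.
sourceanalyzer -b MyProject -scan -f MyProject.fpr
ReportGenerator.bat -format xml -f new_issues.xml -source MyProject.fpr -template MyNewIssuesTemplate.xml

REM Fail the job (so the CI system raises an alert) if the report contains any issues.
REM "<Issue" is an assumed element name - adjust it to match the XML your template produces.
findstr /c:"<Issue" new_issues.xml >nul && exit /b 1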
If you are using Fortify SCA, you should also have access to Fortify Software Security Center (SSC). SSC can be used to track trending data across builds of a project. SSC has built-in capabilities to send out alerts based on user-defined events within SSC; I have never worked with those, so I can't offer any thoughts other than what the docs say.
The results generated by Fortify SCA (.fpr files) are ZIP archives of XML documents storing all the relevant data; I suspect some of the data in those files is tied to the SCA rulesets present in both the SCA and SSC instances. Without the rulesets, I suspect you would be able to determine that new issues have been introduced, but not get good data on what they are, their priority level, etc.

Bug/issue tracking integration with Cruise control

I am putting together a set of applications to create an automated build system for the Microsoft platform (both the products I chose and the software I will build run on Windows). The products I've chosen are:
Code repository: Subversion
Continuous integration: CruiseControl
Unit testing: NUnit
Test coverage: NCover
Static code analysis: FxCop
Now I need to choose a bug/issue tracking system (free if possible) that can be, in some way, integrated with the previous products.
What do I mean by integration? Well, all these products produce a file as output, and I want to be able to publish the errors and bugs they find into the tracking system.
Do you know of a product, technique, or trick that can help me do this?
Thanks in advance.
First off, these are all tools I have experience with, and I congratulate you on your choices - these tools will serve you well if you use them wisely.
The most common usage of these tools is that CC would fail the build if certain criteria are not met, e.g.:
A unit test fails
Code coverage falls below a certain threshold
FxCop detects a violation of a certain severity
Because the build would fail, and in continuous integration a failed build should be fixed immediately, you wouldn't really need to put those issues into a bug tracking system. Think of build-failing errors as being as severe as the code not compiling - you drop everything and fix them right away.
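For example, the build task that CruiseControl executes might look something like this (a sketch only; tool paths, assembly names, and the exact FxCopCmd exit-code behavior are assumptions to adapt to your setup):

REM Run the unit tests; a non-zero exit code fails the CruiseControl build.
nunit-console.exe MyProduct.Tests.dll
if errorlevel 1 exit /b 1

REM Run FxCop; check your version's documentation for how rule violations map to exit codes.
FxCopCmd.exe /f:MyProduct.dll /o:fxcop-results.xml
if errorlevel 1 exit /b 1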
