Fortify stuck at 63% while running a Java project when excluding "HPE Security Fortify Secure Coding Rules, Extended JavaScript"

We're using Fortify 4.21 on a Linux platform for Java projects.
Yesterday we scanned one project, which contains 1019 files, and it took almost 3 hours to complete.
I gave "-Xmx16G" since this machine has 24 GB of RAM, and other than Fortify no other application is running on it.
Today I ran the same project again, but excluded "HPE Security Fortify Secure Coding Rules, Extended JavaScript" under Configure Rulepacks.
With this, the scan reached 63% in an hour, but it stayed there even after 6 hours. How can I find out what the issue was? How can I see at which stage it stopped, or where to find the logs?
Please help.

This usually happens for various reasons, and a memory issue is one of them. Please check the Fortify log: if you see any heap space error, you need to increase the memory size. (By the way, Fortify 4.21 has improved memory management; you can use multiple workers for a parallel scan to speed things up.)
To check the log:
Go to the Fortify installation folder, open core\config\fortify.properties, and look at the value of com.fortify.WorkingDirectory; that is your Fortify root folder path on your machine.
Go to \Fortify\sca6.21\log\sca.log to view the scan log.
Once you open the log file, check whether the translation completed successfully or whether the issue is in the scan phase.
You can also generate the log in a different location by adding the -logfile flag in both the translation and scan phases, as in the sketch below.
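For example, a rough sketch of those steps on Linux; the install path, build ID, classpath, and log locations here are placeholders, so adjust them to your environment:
# Find the Fortify working directory configured for this install
grep com.fortify.WorkingDirectory /opt/Fortify/Core/config/fortify.properties
# Translation phase, writing its log to an explicit location
sourceanalyzer -b MyProject -Xmx16G -logfile /tmp/translation.log -cp "lib/*.jar" "src/**/*.java"
# Scan phase, with its own log file
sourceanalyzer -b MyProject -Xmx16G -logfile /tmp/scan.log -scan -f results.fpr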
Hope this helps ....

Related

Issues get reopened and comment history disappears on subsequent analysis

We're using SonarQube 5.6.6 (LTS) with SonarC# plugin version 6.5.0.3766 and TFS 2017 update 2 (SonarQube plugin v. 3.0.2).
We're repeatedly seeing, that issues that were previously marked as resolved (Won't fix) get reopened. My question is: Why does SonarQube behave in this way?
This issue is also mentioned in a number of different posts(1,2,3) on StackOverflow but with no root cause or resolution. Link 3 also states that this is an issue using SonarQube 6.2.
I'm curious as to whether this is due to a misconfiguration on our part or an integrated part of SonarQube?
Our SonarQube server is running on a Windows Server 2012 R2 machine with a MS SQL backend, if that's relevant.
Furthermore, we're using TFVC for versioning and get blame through the SCM plugin. If an issue has been marked as resolved (won't fix), I've noticed that it appears to be reopened as a new issue (i.e. no history available).
Example: A colleague marked an issue as resolved (won't fix) in a file which was last modified in TFVC back in November 2015. However, this morning the issue was marked as open and assigned to the developer who originally checked in the code. There is no history in SonarQube of the issue having previously been in a resolved state. It appears as if it's a new issue in a new file instead of a known issue which has already been resolved.
To avoid weird issues related to compiling our C# solution, we always clean our workspace completely prior to our build. I don't know if that is a factor. Our builds are also executed on different build machines, so I wonder if that makes SonarQube think that we're indeed always analyzing new files.
Could the use of different build machines and/or build definitions for the same SonarQube project result in this behavior?
I've also seen from the logs and reports, that SonarQube appears to analyze the ENTIRE solution and not only the changed files. This makes our analysis very time consuming and not at all suitable in a fast feedback loop. I suspect the long analysis period and the issues reopening is related.
Analysis of a project with approx. 280 KLOC takes approx. 8-10 minutes as a background task on the server. That's on subsequent analyses (i.e. not the first run).
Is this related to the above mentioned problem of issues getting reopened by the server?
Strangely enough, leak periods appear to work correctly, i.e. we correctly identify issues within the right leak period. I've not verified this in detail yet, but it's definitely not ALL old issues that get reported as open within the leak period. We only see old issues reappear, if the file that contains them has been modified - this activates all the other issues (from a previous version or leak period) within that file.
I've not specified any additional cmdline parameters for the SonarQube scanner during builds apart from the TFVC SCM plugin and path for coverage data.
We're using the VSTEST task v. 2 as otherwise it's not possible to collect code coverage in SonarQube when using TFS 2017 update 2 and VS 2017.
Please advise of any further data that I should supply to help this further.
Thank you for your help!

VSTS agent very slow to download artifacts from local network share

I'm running an on-prem TFS instance with two agents. Agent 1 has a local path where we store our artifacts. Agent 2 has to access that path over a network path (\\agent1\artifacts...).
Downloading the artifacts from agent 1 takes 20-30 seconds. Downloading the artifacts from agent 2 takes 4-5 minutes. If from agent 2 I copy the files using explorer, it takes about 20-30 seconds.
I've tried adding other agents on other machines. All of them perform equally poorly when downloading the artifacts but quick when copying manually.
Anyone else experience this or offer some ideas of what might work to fix this?
Yes, it's definitely the v2 agent that's causing the problem.
Our download artifacts step has gone from 2 minutes to 36 minutes, which is completely unacceptable. I'm going to try out agent v2.120.2 to see if that's any better...
Agent v2.120.2
I think it's because of the amount of files in our artifacts; we have 3.71 GB across 12,042 files in 2,604 folders!
The other option I will look into is zipping, or creating a NuGet package for each published artifact, and then unzipping after the drop (see the sketch below). Not the ideal solution, but something I've done before when needing to use RoboCopy, which is apparently what this version of the agent uses.
RoboCopy is not great at handling lots of small files, and having to create a handle for each file across the network adds a lot of overhead!
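For illustration, a minimal sketch of that workaround; the archive name and folder paths are made up, and tar is just one option (bsdtar ships with recent Windows versions, any archiver would do):
# Before publishing the drop: bundle the artifact folder into one archive so the
# network copy is a single large transfer instead of thousands of small file handles
tar -czf drop.tar.gz -C ./artifacts .
# After downloading on the consuming agent: unpack it again
mkdir -p ./artifacts-extracted && tar -xzf drop.tar.gz -C ./artifacts-extracted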
Edit:
The change to the newest version made no difference. We've decided to go a different route and use an Artifact type of "Server" rather than "File Share" which has sped it up from 26 minutes to 4.5 minutes.
I've found the source of my problem, and it seems to be the v2 agent.
Going off of Marina's comment, I tried to install a second agent on agent 01, and it had the exact same behavior as agent 02. I tried to figure out what was different and then noticed that 01's agent version is 1.105.7 while the new test instance is 2.105.7. I took a stab in the dark and installed 1.105.7 on my second server, and they now have comparable artifact download times.
I appreciate your help.

Fortify Rescan issues

Fortify Real world scenario issue:
The real issue I consistently have is not the actual remediation of Fortify findings, but rather reliably suppressing findings that are determined to be false positives. I can suppress them in the report, I'm confident about that, but that still doesn't prevent the same issues from being identified in a subsequent scan of the code. And that, in turn, involves significant time on my part to suppress them EVERY time we run a scan.
So I may be deploying changes to the same code files several times throughout the year, and every time I need to spend significant time removing false positives from the results.
My flow:
scan --> identify false positives --> suppress in report --> deploy --> make changes again --> scan --> identify false positives --> suppress in report --> deploy. This process repeats.
Is there any way to overcome this kind of repeated problem? That would help me a lot.
The problem I think you're experiencing requires merging the FPR (Fortify Project Results). If you perform analysis in one FPR and then do another scan, there needs to be a merge to bring the previous analysis forward. Some of the Fortify products do this automatically: Software Security Center, the Visual Studio plugin, and the Eclipse plugin automatically merge the new FPR with the old FPR. You can also merge FPR files manually in Audit Workbench (it's under Tools > Merge Audit Projects), or on the command line using FPRUtility. The command would be:
FPRUtility -merge -project <primary.fpr> -source <secondary.fpr> -f <output.fpr>
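For example, a hypothetical invocation (the file names are made up), assuming new_scan.fpr is the freshly scanned result used as the primary project and old_audited.fpr is the previously audited FPR whose analysis you want carried forward:
FPRUtility -merge -project new_scan.fpr -source old_audited.fpr -f merged.fpr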

How to increase the log file size threshold for TFS 2015 Build

So those of us using the new TFS build system will undoubtedly have come across this rather annoying message:
Has anyone come across a setting for this, and how to eliminate or increase it? In this particular example, my log file is just over 2 MB. Of course I can download the logs, but I'm happy to wait a few seconds for this to load! I want the errors and warnings highlighted and coloured nicely in my scrollbar.
Unfortunately, there are no such settings.
You can use some useful tools to help you analyze your log, such as Log Parser or Splunk. For more options, see: What is the best log analysis tool that you used?

Zip old builds in jenkins plugin

Is there a Jenkins plugin to ZIP old builds? I don't want to package just the generated archive (I am deleting those). I want to ZIP just the log data and the data used by tools like FindBugs, Checkstyle, Surefire, Cobertura, etc.
Currently I am running out of disk space due to Jenkins. There are some build log files that reach 50 MB due to running 3000+ unit tests (most of these are severely broken builds full of stack traces in which everything fails). This happens frequently in some of my projects, so I get this for every bad build. Good builds are milder and may get up to around 15 MB, but that is still a bit costly.
The Surefire XML files for these are huge too. Since they tend to contain very repetitive data, I could save a lot of disk space by zipping them. But I know of no Jenkins plugin for this.
Note: I am already deleting old builds not needed anymore.
The 'Compress Buildlog' plugin does pretty much exactly what you're asking for, for the logs themselves at least.
https://github.com/daniel-beck/compress-buildlog-plugin
For everything else, you'll probably want an unconditional step after your build completes that manually applies compression to other generated files that will stick around.
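For instance, a minimal sketch of such a post-build shell step, assuming a Maven-style layout where Surefire writes TEST-*.xml reports into the workspace ($WORKSPACE is the standard Jenkins environment variable):
# Compress the Surefire report XMLs left in the workspace after the build;
# run this only after any plugins that need to read them have finished
find "$WORKSPACE" -path '*surefire-reports*' -name 'TEST-*.xml' -print0 | xargs -0 gzip -9v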
The administering Jenkins guide gives some guidance on how to do this manually. There are also links to the following plugins
Shelve Project
thinBackup
The last one is really designed to backup Jenkins configuration, but there are also options for build results.
Although this question is from almost 3 years ago, other people may search for the same thing, so here is my answer.
If you want to compress the current build job's log, use this Jenkins plugin.
If you want to compress old Jenkins job logs, use the following script (-mtime +5 means the file change time is more than 5 days ago):
cd "$JENKINS_HOME/jobs"
find * -name "log" -mtime +5|xargs gzip -9v '{}'
